You can handle arbitrarily complex tasks with large parametric models trained with SGD. The problem is that doing it well requires a *dense sampling* of the input/output space you're learning, because the generalization power of these models is extremely weak. That's expensive.
Deep learning is immensely useful, but it does not bring us meaningfully closer to understanding intelligence (as for "AGI", that's a sci-fi talking point, since even human intelligence is specialized). Intelligence is 100% about efficient generalization. DL is orthogonal to that.
Schematically, intelligence is skill divided by experience (I = S/E). Deep learning enables arbitrarily high skill levels, but requires insanely high amounts of "experience" (data) to achieve these levels, resulting in an extremely low intelligence factor.
Again, a DL model requires a dense sampling of what it's doing. An intelligent agent (like a human) can do extreme generalization from little data. At this time, no one has any clue how that works. However, it may not necessarily be very complicated. Who knows...
You can follow @fchollet.