Deep learning is capable of performing reasoning tasks, in the sense that it's a pattern recognition engine, and any task (including ones we would categorize as involving reasoning) can be cast as pattern recognition, given a sufficiently dense sampling of the input/output space.
It's also technically possible to train deep learning models to perform specific symbolic tasks (say, arithmetic, sorting...) using *few* data points, if you hard-code that task in the architecture space (replacing learning with a prior)...
Neither "train on a dense sampling of the data" nor "practically hard-code a solution template and then tune a few parameters using SGD" is a particularly good option.
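The contrast above can be sketched in a few lines of numpy. This is a toy illustration, not anything from the thread itself: addition stands in for the "symbolic task", a small tanh MLP trained on thousands of densely sampled points stands in for the first option, and a model whose architecture already hard-codes linearity (so two data points suffice) stands in for the second.

```python
import numpy as np

rng = np.random.default_rng(0)

# Option 1: dense sampling + generic model.
# Cast addition as pattern recognition: densely sample (a, b) -> a + b
# and fit a tiny one-hidden-layer tanh MLP with full-batch gradient descent.
X = rng.uniform(-1, 1, size=(2000, 2))   # dense sampling of the input space
y = X.sum(axis=1, keepdims=True)          # target: a + b

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

X_test = rng.uniform(-1, 1, (100, 2))
mae = np.abs(np.tanh(X_test @ W1 + b1) @ W2 + b2
             - X_test.sum(axis=1, keepdims=True)).mean()

# Option 2: hard-code the task in the architecture, tune few parameters.
# The model form f(a, b) = w1*a + w2*b already encodes linearity (a prior),
# so two data points are enough to recover addition exactly.
X_few = np.array([[1.0, 0.0], [0.0, 1.0]])
y_few = np.array([1.0, 1.0])
w = np.linalg.lstsq(X_few, y_few, rcond=None)[0]   # -> [1., 1.]
```

The generic model needs thousands of samples to approximate addition, and only within the region it was trained on; the hard-coded model nails it from two points, but only because the solution template was supplied by hand rather than learned.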
You can follow @fchollet.