Deep learning is capable of performing reasoning tasks, in the sense that it's a pattern-recognition engine, and any task (including ones we would categorize as involving reasoning) can be cast as pattern recognition, given a sufficiently dense sampling of the input/output space.
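A toy sketch of this point (my illustration, not from the thread): treat two-digit addition, a "reasoning" task, as pure pattern matching. If you sample the entire input space densely, a nearest-neighbor lookup over memorized examples answers every query without ever representing the rule of addition.

```python
import numpy as np

# Dense sampling: memorize every pair (a, b) with 0 <= a, b < 100,
# along with its label a + b. No arithmetic rule is learned.
inputs = np.array([(a, b) for a in range(100) for b in range(100)], dtype=float)
outputs = inputs.sum(axis=1)  # memorized "labels"

def predict(a, b):
    """Nearest-neighbor 'model': return the label of the closest memorized input."""
    dists = np.abs(inputs - np.array([a, b], dtype=float)).sum(axis=1)
    return outputs[np.argmin(dists)]

print(predict(13, 58))  # -> 71.0, because the sampling is exhaustive
```

The catch, of course, is that "dense sampling" scales exponentially with input dimensionality, which is the point of the thread.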
It's also technically possible to train deep learning models to perform specific symbolic tasks (say, arithmetic, sorting...) using *few* data points, if you hard-code the task structure into the architecture (replacing learning with a prior)...
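To make the second option concrete (again, my sketch under an assumed toy task): suppose the target is "add the inputs". If the summation is hard-coded into the architecture, learning collapses to fitting a single scalar by SGD, and a handful of examples is enough.

```python
import numpy as np

# The symbolic structure of the task -- output = w * sum(inputs) --
# is baked into the "architecture" as a prior. Only the scalar w is
# learned, so 3 data points suffice.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # only 3 examples
y = X.sum(axis=1)                                    # targets: a + b

w = 0.1    # the model's single free parameter
lr = 0.005
for _ in range(2000):
    pred = w * X.sum(axis=1)                          # sum is hard-coded
    grad = 2 * ((pred - y) * X.sum(axis=1)).mean()    # d(MSE)/dw
    w -= lr * grad

print(round(w, 3))  # converges to 1.0
```

All the heavy lifting is done by the prior, not by learning; SGD merely tunes one coefficient.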
Neither "train on a dense sampling of the data" nor "practically hard-code a solution template and then tune a few parameters using SGD" is a particularly good option.
You can follow @fchollet.