François Chollet @fchollet — Deep learning @google. Creator of Keras, neural networks library. Author of 'Deep Learning with Python'. Opinions are my own. Sep. 11, 2019 · 1 min read

Humans have vanishingly little innate knowledge about the visual appearance of objects in the world. But we do have heightened sensitivity to certain textures or shapes characteristic of deadly animals (snakes & spiders mostly). This is evolutionarily ancient, not specific to us.

The reasons we have so little innate visual knowledge are interesting. Basically, any visual knowledge involves many bits of information, and has to be encoded via hardwired connections in the visual cortex (or before). This is an extremely low-bandwidth process.

Because it's so slow, it's only applicable to information that is stable over hundreds of millions of years. Very little of the visual world is stable over that time frame (e.g. the visual difference between male & female faces cannot be hardcoded because it changes too quickly).

Further, there needs to be strong evolutionary pressure associated with this information over this extremely long time horizon. Very little of the visual world involves life and death questions. But snakes and spiders must have been a major threat to our evolutionary ancestors.

Bonus gif: a cat reacting to an unexpected cucumber

Also, note that this is why the take "evolutionary innate knowledge is the human equivalent of pretraining in neural networks, see, humans are not data-efficient after all" is so incredibly braindead and ignorant.

Humans come into the world with a lot of priors, but they are very specifically scoped, and they're very much unlike pretraining knowledge in neural networks. Most of them are metalearning priors. Babies don't come with pretrained ImageNet weights.
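The distinction can be made concrete with a toy sketch (my illustration, not from the thread; `conv1d` and all names here are made up, not a real API): a *content* prior would be pretrained weights, while a *structural* prior is baked into the architecture itself. An untrained convolution already "knows" that the same pattern can appear anywhere — shift the input and the output shifts with it — without carrying any learned visual content.

```python
# Illustrative contrast: structural prior vs. learned content.
# A random (untrained) convolution has zero learned knowledge, yet its
# weight sharing already encodes shift equivariance -- a prior about the
# *structure* of the world, not its specific appearance.
import numpy as np

def conv1d(signal, kernel):
    # Slide the same kernel over every position (weight sharing).
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

rng = np.random.default_rng(0)
kernel = rng.normal(size=3)      # random weights: no learned content
x = rng.normal(size=16)
x_shifted = np.roll(x, 1)        # shift the "image" by one position

y = conv1d(x, kernel)
y_shifted = conv1d(x_shifted, kernel)

# The structural prior holds even with random weights: a shifted input
# simply produces a shifted output.
print(np.allclose(y_shifted[1:], y[:-1]))  # True
```

The point of the sketch: the prior lives in the wiring (how the kernel is applied), not in the values of the weights — loosely analogous to innate structure versus pretrained knowledge.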

Crucially, our prior knowledge was not evolved in the past 500k years. It is very ancient and shared by many of our distant cousins (pretty much 100% shared by great apes in particular). It isn't what makes us special.

Not talking specifically about visual knowledge here -- this is true of all of our priors, including metalearning priors. These things take time to encode. Anything shorter than 500k years won't make a meaningful difference.
