Janelle Shane @JanelleCShane Research Scientist in optics. Plays with neural networks. Avid reader, writer, and player of Irish flute. she/her. wandering.shop/@janellecshane Jan. 03, 2019 1 min read

One write-up of the finding summarized it very well:

"in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do."

Researchers were tipped off when the algorithm not only did suspiciously well at converting maps to satellite images, but also reproduced features like trees and cars that weren't in the maps at all.

In fact, it appeared not to be looking at the maps at all when reconstructing satellite images. It could hide the original satellite data in maps of completely different scenes, and still get the original image back.
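
The actual network did this with faint high-frequency patterns, but the same trick is easy to see with a cruder, hypothetical analogue: stashing one image's data in the least-significant bits of another. A minimal sketch (the variable names and the 2-bit split are illustrative choices, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # stands in for the "map"
secret = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stands in for the satellite image

# Hide the secret's top 2 bits in the cover's bottom 2 bits.
# To a human the cover looks unchanged: max per-pixel error is 3 out of 255.
stego = (cover & 0b11111100) | (secret >> 6)

# Recover a coarse version of the secret from the doctored "map" alone,
# without ever looking at the cover's visible content.
recovered = (stego & 0b00000011) << 6

assert np.abs(stego.astype(int) - cover.astype(int)).max() <= 3
assert np.abs(recovered.astype(int) - secret.astype(int)).max() < 64
```

The point of the analogy: the "map" can carry enough hidden payload to reconstruct the "satellite image" even though nothing visible in the map explains the reconstruction.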

Technically, the algorithm did what they asked: it was literal-minded, and it followed the path of least resistance.

This is one reason why machine learning algorithms are prone to bias: technically, their job was "copy the humans". It's not their fault the humans in their training data were being all biased.

Machine learning algorithms will often AMPLIFY the bias in their training data. From their perspective, reproducing racial and/or gender bias is a handy shortcut toward their goal of "copy the humans".
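
The amplification step is easy to see with a toy model (entirely hypothetical numbers, not from the thread): if 70% of a group's training labels go one way, the accuracy-maximizing per-group rule predicts that label 100% of the time, so a 70/30 skew in the data becomes a 100/0 skew in the predictions.

```python
from collections import Counter

# Hypothetical biased training data: (group, label) pairs.
# 70% of "group_a" examples were labeled "hired"; only 30% of "group_b" were.
train = ([("group_a", "hired")] * 70 + [("group_a", "rejected")] * 30
         + [("group_b", "hired")] * 30 + [("group_b", "rejected")] * 70)

def fit_majority(data):
    """Per-group majority vote: the accuracy-maximizing constant rule per group."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority(train)
print(model)  # {'group_a': 'hired', 'group_b': 'rejected'}
```

Real models are rarely this blunt, but the pressure is the same: leaning harder on the biased signal is a cheap way to score better at "copy the humans".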

Given that most training data will contain bias, the tendency of algorithms to copy and amplify bias is a huge issue. For more reading:
