Janelle Shane @JanelleCShane
Research Scientist in optics. Plays with neural networks. Avid reader, writer, and player of Irish flute. she/her. wandering.shop/@janellecshane
Jan. 03, 2019 · 1 min read

"in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do."
 https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/ 

summarized it very well:

Researchers were tipped off when the algorithm not only did suspiciously well at converting maps to satellite images, but also reproduced features like trees and cars that weren't in the maps at all.

In fact, it appeared not to be looking at the maps at all when reconstructing satellite images. It could hide the original satellite data in maps of completely different scenes, and still get the original image back.
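
To make the trick concrete, here's a minimal sketch in Python. It's my own illustration, not the researchers' method: plain least-significant-bit steganography, hiding one image in low-amplitude changes to another. The CycleGAN generator reportedly learned a subtler high-frequency encoding, but the principle is the same: the "map" carries a hidden payload a human can't see.

```python
import numpy as np

# Toy illustration (not CycleGAN's actual learned encoding): hide one
# image inside another as a near-imperceptible, low-amplitude signal,
# then read it back out later.

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # the "map"
secret = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # the "satellite photo"

# Encode: stash the secret's top 2 bits in the cover's bottom 2 bits.
# Each cover pixel changes by at most 3/255 -- invisible to a human.
stego = (cover & 0b11111100) | (secret >> 6)

# Decode: read those 2 bits back out. The visible cover content is
# ignored entirely, just as the generator ignored the visible map.
recovered = (stego & 0b00000011) << 6

print(np.abs(stego.astype(int) - cover.astype(int)).max())       # <= 3
print(np.abs(recovered.astype(int) - secret.astype(int)).max())  # <= 63: coarse, but recognizable
```

A decoder like this scores perfectly on "reconstruct the satellite image from the map" without ever looking at the map itself, which is exactly the kind of shortcut the researchers caught.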

Technically, the algorithm did what the researchers asked. It was literal-minded, but it also took the path of least resistance.

This is one reason why machine learning algorithms are prone to bias: technically, their job is "copy the humans". It's not their fault the humans in their training data were being all biased.

Machine learning algorithms will often AMPLIFY the bias in their training data. From their perspective, reproducing racial and/or gender bias is a handy shortcut toward their goal of "copy the humans".
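
Here's a toy numeric sketch of that amplification, with invented numbers (my example, not from the thread): if a protected attribute predicts the training label 70% of the time, a model that only cares about matching the humans can lean on that attribute 100% of the time.

```python
import numpy as np

# Toy sketch of bias amplification, with made-up numbers: group 1 is
# "hired" 70% of the time in the training data, group 0 only 30%.

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)                      # a protected attribute
hired = rng.random(n) < np.where(group == 1, 0.7, 0.3)  # biased human decisions

# The accuracy-maximizing rule with only this feature is deterministic:
# predict "hired" if and only if group == 1.
predicted = group == 1

data_gap = hired[group == 1].mean() - hired[group == 0].mean()
model_gap = predicted[group == 1].mean() - predicted[group == 0].mean()
print(f"hire-rate gap in the data:  {data_gap:.2f}")    # ~0.40
print(f"hire-rate gap in the model: {model_gap:.2f}")   # 1.00
print(f"model accuracy:             {(predicted == hired).mean():.2f}")  # ~0.70
```

The humans in the training data had a 40-point gap between groups; the model that best "copies the humans" turns it into a 100-point gap. That's amplification, not just reproduction.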

Given that most training data will contain bias, the tendency of algorithms to copy and amplify bias is a huge issue. For more reading:
 https://medium.com/@AINowInstitute/gender-race-and-power-5da81dc14b1b?linkId=60396849 

