"in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do."
One write-up summarized it very well:
Researchers were tipped off when the algorithm not only did suspiciously well at converting maps to satellite images, but was able to reproduce features like trees & cars that weren't in the maps at all.
In fact, it appeared not to be looking at the maps at all when reconstructing satellite images. It could hide the original satellite data in maps of completely different scenes, and still get the original image back.
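The trick the network discovered is essentially steganography: information hidden in pixel perturbations too subtle for a human to notice, yet fully recoverable by the right decoder. A minimal sketch of that idea (a toy least-significant-bit scheme, not the actual high-frequency encoding CycleGAN learned):

```python
import numpy as np

# Toy illustration (not the researchers' actual method): stash the high
# bits of a "secret" image in the low bits of a "cover" image. The cover
# looks almost unchanged, but the secret's coarse structure comes back out.

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # e.g. a map tile
secret = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # e.g. satellite pixels

def hide(cover, secret, bits=2):
    """Overwrite the low `bits` of cover with the top `bits` of secret."""
    mask = (1 << bits) - 1
    return (cover & np.uint8(0xFF ^ mask)) | (secret >> (8 - bits))

def recover(stego, bits=2):
    """Read the low bits back out and shift them up to full scale."""
    mask = (1 << bits) - 1
    return (stego & np.uint8(mask)) << (8 - bits)

stego = hide(cover, secret)
out = recover(stego)

# The carrier changes by at most 3 gray levels per pixel...
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 3
# ...yet the secret's top two bits are recovered exactly.
assert np.array_equal(out, secret & np.uint8(0xC0))
```

CycleGAN did something analogous on its own, because hiding the satellite image inside the map was an easier route to a perfect round-trip than actually learning the map-to-photo mapping.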
This is one reason why machine learning algorithms are prone to bias: technically, their job was "copy the humans". It's not their fault the humans in their training data were being all biased.
Machine learning algorithms will often AMPLIFY the bias in their training data. From their perspective, reproducing racial and/or gender bias is a handy shortcut toward their goal of "copy the humans".
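A toy sketch of how that amplification happens (a hypothetical hiring setup, made up for illustration): a model that simply predicts the majority training label per group turns a 70/30 skew in the data into a 100/0 skew in its output.

```python
import numpy as np

# Hypothetical biased training data: two groups, where group 0 gets a
# positive label 70% of the time and group 1 only 30% of the time.
rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)
label = np.where(group == 0, rng.random(n) < 0.7, rng.random(n) < 0.3)

# "Copy the humans": the laziest accurate strategy is to predict the
# majority training label for each group.
majority = {g: label[group == g].mean() > 0.5 for g in (0, 1)}
pred = np.array([majority[g] for g in group])

# The training data had a ~0.4 gap between groups; the model's
# predictions have a gap of 1.0 -- the bias got amplified, not copied.
train_gap = label[group == 0].mean() - label[group == 1].mean()
pred_gap = pred[group == 0].mean() - pred[group == 1].mean()
assert pred_gap > train_gap
```

The majority-vote model here even scores well on accuracy against the biased labels, which is exactly why nothing in ordinary training pushes back against this shortcut.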
Given that most training data will contain bias, the tendency of algorithms to copy and amplify bias is a huge issue. For more reading:
You can follow @JanelleCShane.