"in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do."
Janelle Shane summarized it very well:
Researchers were tipped off when the algorithm not only did suspiciously well at converting maps to satellite images, but was able to reproduce features like trees & cars that weren't in the maps at all.
In fact, it appeared not to be looking at the maps at all when reconstructing satellite images. It could hide the original satellite data in maps of completely different scenes, and still get the original image back.
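The mechanism is easier to picture with a toy analogy. This sketch is not CycleGAN's actual trick (the network learned its own imperceptible encoding); it just shows how a full second image can ride along in low-amplitude detail a human viewer would never notice, here via the classic least-significant-bits scheme:

```python
# Toy steganography sketch: hide one image's high bits inside another
# image's low bits, then recover the hidden image from the result.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # the "map"
secret = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # the "satellite image"

# Keep the cover's top 4 bits; stash the secret's top 4 bits underneath.
stego = (cover & 0xF0) | (secret >> 4)

# The stego image differs from the cover by at most 15 per pixel,
# yet the secret's high bits come back exactly.
recovered = np.left_shift(stego & 0x0F, 4)

assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 15
assert np.array_equal(recovered, secret & 0xF0)
```

From the network's point of view this is a perfectly valid solution: the loss only asked it to reconstruct the satellite image, and smuggling the answer through the intermediate map is cheaper than actually learning the map-to-satellite mapping.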
This is one reason why machine learning algorithms are prone to bias: technically, their job was "copy the humans". It's not their fault the humans in their training data were being all biased.
Machine learning algorithms will often AMPLIFY the bias in their training data. From their perspective, reproducing racial and/or gender bias is a handy shortcut toward their goal of "copy the humans".
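A minimal simulation makes the amplification concrete (the 70/30 split and "hiring" framing are invented for illustration): if the training data favors one group 70% of the time, an accuracy-maximizing predictor doesn't reproduce the 70/30 split, it hardens it into a 100/0 rule.

```python
# Toy bias-amplification sketch: a soft correlation in the training data
# becomes a hard rule in the model's output.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)        # the only feature: group membership
label = np.where(group == 1,
                 rng.random(n) < 0.7,     # group 1 labeled positive ~70% of the time
                 rng.random(n) < 0.3)     # group 0 labeled positive ~30% of the time

# The accuracy-maximizing predictor over this feature just takes the
# majority label within each group.
majority = np.array([label[group == g].mean() > 0.5 for g in (0, 1)])
pred = majority[group]

# Training data: ~70% vs ~30% positive. Model output: 100% vs 0%.
print(label[group == 1].mean(), label[group == 0].mean())
print(pred[group == 1].mean(), pred[group == 0].mean())
```

The model isn't malfunctioning here; "predict the majority" is exactly what minimizing error rewards, which is why biased training data tends to come back out stronger than it went in.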
Given that most training data will contain bias, the tendency of algorithms to copy and amplify bias is a huge issue. For more reading:
You can follow @JanelleCShane.