François Chollet (@fchollet) · Deep learning @google. Creator of Keras, neural networks library. Author of 'Deep Learning with Python'. Opinions are my own. · Jan. 23, 2019

"Bias laundering" happens when we choose to ignore the biases of automated decision systems because of the illusion that all algorithms must be objective since they're "driven by math" or "run by a computer".

Algorithmic biases could be hard-coded by the implementer, or could come from a biased choice of features, or could come from biased data (all data being biased in some way), or could simply arise from spurious correlations (overfitting). Math/computers are a detail in the story.

In general, automated decision systems tend to inherit the biases of the human-driven process that they replace. Unfortunately, these biases then acquire a veneer of objectivity and become harder to inspect or fix.
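A minimal sketch of that inheritance, using made-up synthetic data and scikit-learn (not an example from the thread, and the variable names are hypothetical): a model trained to imitate historically skewed approval decisions reproduces the skew, even though the pipeline is "just math".

```python
# Toy illustration: a model trained on biased historical labels inherits the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # a demographic attribute (hypothetical)
skill = rng.normal(0, 1, n)     # the thing we actually want to measure

# Historical human decisions: group 1 was approved more often at equal skill.
past_approval = (skill + 0.8 * group + rng.normal(0, 1, n)) > 0

# The "objective" automated system is trained to imitate those past decisions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_approval)

# Same skill, different group -> different predicted approval probability.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

Nothing in the training step is subjective in the mathematical sense, yet the model's outputs carry the original human bias forward.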

With humans at least, new generations bring change. Algorithmic bias may prove to be more entrenched than human-driven bias, due to the greater indirection and continuity brought by datasets and algorithms, as opposed to someone's judgment...

