zeynep tufekci @zeynep Thinking about our tools, ourselves. @UNCSILS prof + @NYTimes writer. Newsletter: t.co/klJwJ6vkB5 Book: t.co/RmMouki39B Jan. 30, 2018 1 min read

My latest for the @nytimes. The Strava debacle shows that individualized "informed consent" is not sufficient for data privacy. Given the complexity, companies cannot fully inform us, and thus we cannot fully consent. Data privacy is more of a public good.  https://www.nytimes.com/2018/01/30/opinion/strava-privacy.html 

With enough data, all data is "personally identifiable" data. With enough data, machine learning can suss out undisclosed traits. When combined, data can reveal things beyond what anyone imagined. Informed consent is not a workable model—let alone "click accept" tiny-font legalese.

Yep. "Let's monetize our data person by person" is not a solution. It's like saying let's solve traffic congestion by letting the rich fly helicopters over cities. It will create more problems than it solves—and not just inequality.

Since Strava itself obviously couldn't foresee what its data would reveal in combination, I'm not going to put it on the user to somehow foresee all current potential uses, let alone what machine learning will bring in the future. Unworkable.

Thank you!

We aren’t informed or consenting.

A great thread from @feamster (a runner!) on the real benefits of the Strava heatmap as a public good. We don’t disagree on the challenges of informed consent or inadvertent future uses of data—but he convinced me there are better (bigger) examples.

