Guillaume Chaslot @gchaslot · AI for good. algotransparency.org / Univ Paris Est / Advisor at Center for Humane Technology / Ex-Google · Jul. 14, 2019 · 3 min read

Thread

My first op-ed in @WIRED: how the AI feedback loops I helped build at YouTube can amplify our worst inclinations, and what to do about it.

 https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/ 

1/

Earlier this year a YouTuber showed how YouTube's recommendation algorithm was pushing thousands of users towards sexually suggestive videos of children, used by a network of pedophiles.

YouTube bans sexual videos, so how could this happen?
 https://www.youtube.com/watch?v=O13G5A5w5P0 
2/

At YouTube, we designed the AI to maximize engagement. So if pedophiles spend more time on YouTube than other users, the AI's job becomes to try to *increase* their numbers.

3/

Even after companies like Nestlé and Disney pulled their ads from YouTube, the problem was not completely fixed: last month the @nytimes showed that the recommendation engine was still promoting those videos! 4/

 https://www.nytimes.com/2019/06/03/world/americas/youtube-pedophiles.html 

This second time, YouTube reacted more strongly.

Let's take a look at the big picture. 5/

Recommendation systems have been shown by @DeepMindAI to give rise to "filter bubbles" and "echo chambers".

Are these filter bubbles content-neutral? 6/

Even without understanding the complex AI, we can guess which filter bubbles will be favored. How? By looking at how engagement metrics create feedback loops. 7/

The feedback loop works like this (toy simulation below):
1) The type of content that hyper-engaged users like gets more views
2) Then it gets recommended more, since the AI maximizes engagement
3) Content creators will notice and create more of it
4) People will spend even more time on it

8/
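As a back-of-the-envelope illustration, here is a toy Python simulation of that loop. The two content types, the numbers, and the ranking rule are all made-up assumptions for illustration, not YouTube's actual system.

```python
import random

# Toy model: two content types, where "divisive" content hooks a small
# hyper-engaged minority. All numbers are illustrative assumptions.
supply = {"neutral": 1.0, "divisive": 1.0}       # relative amount of each type
watch_time = {"neutral": 5.0, "divisive": 30.0}  # average minutes per view

def recommend():
    """Engagement-maximizing ranker: a content type's chance of being
    recommended grows with (supply * expected watch time)."""
    types = list(supply)
    weights = [supply[t] * watch_time[t] for t in types]
    return random.choices(types, weights)[0]

for step in range(5):
    views = {t: 0 for t in supply}
    for _ in range(1000):            # steps 1-2: views follow recommendations
        views[recommend()] += 1
    for t in supply:                 # step 3: creators chase what got views
        supply[t] += views[t] / 1000
    share = 100 * views["divisive"] / 1000
    print(f"round {step}: divisive content got {share:.0f}% of recommendations")
```

Step 4 (users spending even more time on that content) would feed back into `watch_time`, accelerating the loop further.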

Eventually, hyper-engaged users drive the topics promoted by the AI.

Some of our worst inclinations, such as misinformation, rumors, and divisive content, generate hyper-engaged users, so they often get *favored* by the AI. 9/

One example from last week:

Justin Amash said "Our politics is in a partisan death spiral". Is this "death spiral" good for engagement? Certainly: partisans are hyper-active users. Hence, partisan content benefits from massive AI amplification. 10/
 https://www.washingtonpost.com/opinions/justin-amash-our-politics-is-in-a-partisan-death-spiral-thats-why-im-leaving-the-gop/2019/07/04/afbe0480-9e3d-11e9-b27f-ed2942f73d70_story.html?utm_term=.3c72ebc750ba 

AIs were supposed to solve problems, but these ones appear to amplify some of our worst. What should we do?

11/

Platforms have acknowledged some of these problems and are taking action. Here's how. 12/

Mark Zuckerberg wrote this post to explain why @Facebook needs to demote "borderline content": 13/

 https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/ 

YouTube announced in January 2019 that it aims to reduce recommendations of harmful misinformation. 14/

But these measures are limited to specific types of harmful content, and they go against the platforms' business interests. Hence, the changes are likely to be minimal.

15/

When I talked about these problems internally, some Googlers told me "it's not our fault if users click on **** ".

But part of the reason people click on this content is that they trust @YouTube. 16/

The root of the problem is that users place too much trust in @Google and @YouTube. 17/

Recommendations can be *toxic*: they can gradually harm users, in ways that are difficult to see without access to large-scale data. 18/

Researchers in universities around the world don't have the right data to understand the impact of these AIs on society. 19/

For instance, researchers at the Oxford Internet Institute concluded this week: “Until Google, Facebook [...] share the data being saved on to their servers [...], we will be in the dark about the effects of these products on mental health”
 https://www.theguardian.com/commentisfree/2019/jul/07/too-much-screen-time-hurts-kids-where-is-evidence  20/

Conclusions

Users:

=> Stop trusting Google/YouTube blindly

Their AI works in your best interest only if what you want is to spend as much time as possible on the site. Otherwise, their AIs may work against you, making you waste time or manipulating you. 21/

Platforms:

=> Be more transparent about what your AI decides
=> Align your "loss function" with what users really want, not pure engagement (see the sketch below) 22/
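A minimal Python sketch of what aligning the loss function could look like: blend predicted watch time with an explicit satisfaction signal such as post-watch survey scores. The field names, weights, and candidate items are hypothetical, not any platform's actual API.

```python
# Hypothetical re-ranking objective: trade raw engagement against an
# explicit user-satisfaction signal. All fields and numbers are made up.
candidates = [
    {"id": "calm-explainer", "watch_time": 6.0, "satisfaction": 0.9},
    {"id": "outrage-bait", "watch_time": 25.0, "satisfaction": 0.2},
]

def score(item, alpha=0.2):
    """alpha=1.0 reproduces pure engagement ranking; lower alpha lets
    stated user satisfaction outweigh raw watch time."""
    watch = item["watch_time"] / 30.0  # normalize minutes to [0, 1]
    return alpha * watch + (1 - alpha) * item["satisfaction"]

for item in sorted(candidates, key=score, reverse=True):
    print(item["id"], round(score(item), 2))
```

With alpha = 1.0 the outrage-bait item wins on raw watch time; at alpha = 0.2 the calm explainer ranks first. Of course, "what users really want" has to come from a signal other than clicks, such as surveys, which is its own measurement problem.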

Regulators:

=> Create a special legal status for algorithmic curators
=> Demand some level of transparency for recommendations. This will help us understand the impact of AI, and boost competition & innovation

IBM advocated for legislation.
23/

Here's the full article for more details:
 https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/ 

24/

