TW: racist and homophobic slurs.
I've just finished some very sophisticated #MachineLearning and #AI analysis which could really help @jack and his team to identify and cut down on racist and homophobic abuse on Twitter
I hereby license my work to the public domain. A thread 👇🏻
2/n Using a novel technique called "type stuff into the Twitter search box", we performed sentiment analysis against a large corpus, revealing hidden patterns suggesting possibly "racially tinged" sentiment
Figure A1: "fucking c a m e l j o c k e y s"
3/n Refining our "basically just search Twitter" opinion-mining algorithm, we detect certain sub-sentence level sentiments in Tweets which seem to direct hate speech against certain religions.
Figure A2: "muslim p i g s"
4/n With several refinements to our machine learning models, we were able to extract additional clusters of what might (to some observers) be classed as threats of violence and hate speech against protected classes.
Figure A3: "hope someone r a p e s her"
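The letter-spacing evasion quoted in the figures above ("p i g s", etc.) is trivial to normalize before matching. A minimal sketch in Python, using a hypothetical one-term blocklist (a real system would use a curated, regularly updated list — this is an illustration, not Twitter's actual pipeline):

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKLIST = ["pigs"]

def normalize(text: str) -> str:
    """Collapse the 's p a c e d  o u t' evasion: join any run of single
    word characters separated by spaces back into one word."""
    return re.sub(
        r"\b(?:\w )+\w\b",
        lambda m: m.group(0).replace(" ", ""),
        text.lower(),
    )

def flagged_terms(text: str) -> list[str]:
    """Return blocklist terms found in the normalized text."""
    norm = normalize(text)
    return [term for term in BLOCKLIST if term in norm]
```

Even this toy normalizer defeats the letter-spacing trick: the point is that this class of keyword evasion is not a hard ML problem, just a regex.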
5/5 Conclusion: Sentiment analysis has historically been an extremely tricky problem for curbing abuse on social media platforms.
We understand that @jack and his team at Twitter are doing literally All They Can to solve this problem. We present novel techniques which may help.
Seriously, @jack, this can’t be that difficult. Get out of your ice bath, make some lunch, and do something about this low-hanging fruit.
Like, people literally calling for the lynching of a member of Congress, @jack.
The phrase “At Twitter scale” is doing a lot of work here, because it sets Twitter’s profit margins and Monthly Active Users metric as fixed constraints before we even begin talking about harms.
I know that @TEDchris was asking for questions for @jack, so here are some suggestions.
Twitter could start by permanently banning (based on email address) users who violate the ToS by threatening violence or calling for hate crimes.
It’s shocking that Twitter gives repeated warnings rather than outright banning these folks.
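Banning "based on email address" needs one supporting piece to work: canonicalizing addresses so a banned user can't re-register with trivial aliases of the same inbox. A minimal sketch (the Gmail dot-ignoring and plus-tag rules are real provider behavior; the function itself is a hypothetical illustration, not a claim about Twitter's systems):

```python
def canonical_email(addr: str) -> str:
    """Reduce an email address to a canonical form so a ban list
    catches trivial aliases of the same inbox. Illustration only."""
    local, _, domain = addr.strip().lower().partition("@")
    local = local.split("+", 1)[0]          # drop "+tag" aliases
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # Gmail ignores dots
        domain = "gmail.com"                # googlemail == gmail
    return f"{local}@{domain}"
```

A ban list keyed on `canonical_email(...)` instead of the raw address closes the most obvious re-registration loophole; determined evaders need a genuinely new account, which raises their cost.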
If Twitter wanted to show us that they care about threats of violence and targeted harassment, they could start by simply increasing the size of their Trust & Safety Team by a factor of 20x.
If Twitter wanted to, they could identify key accounts like @IlhanMN who are subject to ongoing high levels of abuse, and proactively police the hate speech and threats of violence in their mentions.
Black & brown women attract abusive comments like flypaper. Let’s start there.
There’s enough low-hanging fruit in reducing Twitter hate speech to feed @jack for 20 years.
Folks like @cindygallop, @KimCrayton1 & @digitalsista have been suggesting easy solutions for years.
Yet tech dudes are all “wOw tHiS iS a hArD pRoBLeM tHaT we’Ve nEvEr tHoUgHt aBoUt”
SHIT. Never mind, people. I hadn’t considered the problem of false positives.
Leaving aside your tone policing, which is problematic, you’ve said nothing substantive.
As a 20-year infosec and applied #AI expert with 6 patents who has run Trust & Safety teams for major social networks,
“It’s complicated” is such a shallow critique.
No. Please LISTEN to what Black women are telling you. They’re not asking for a way to hide tweets to avoid being offended.
Hiding tweets from the victim does nothing to prevent racists from showing up at their work and shooting them.
Network effects ALSO WORK IN REVERSE. That's what many people fail to consider. It's why de-platforming is so effective, as @mekkaokereke eloquently explains.
A huge amount of online abuse is aided & abetted by a relatively small number of actors.
Nazi 14/88 memes just sitting on Twitter for the last 3 years. Tanya Tay is an influential right-winger with 26k followers.
"Fourteen Words, 14, or 14/88, is a reference to the fourteen-word slogan 'We must secure the existence of our people and a future for white children.'"
This white supremacy stuff ain't subtle. Tanya Tay's husband Jack Posobiec, the Pizzagate dude who advocates for white supremacist terrorist violence, is still on Twitter, spouting hatred.
Still with a BLUE CHECKMARK. He's notorious for spreading 14/88 Nazi memes.
There ARE hard problems in curbing online hate speech.
But we mustn’t confuse “technically hard” with “technically easy, but requires us to rethink our profit margins because we’ve built our business model to explicitly treat harm as an externality”.
Twitter’s recent push to “improve your experience” on their platform falls woefully short.
Hiding tweets from Black people does nothing to protect them from all the other white supremacists who read those tweets and then act on them in real life.
Twitter's hate speech policy makes NO sense. I reported this account for tweeting calls for murder & violence against Muslims.
Twitter agreed he violated the ToS. Their response? They wiped that ONE tweet away, but they didn't ban him. How many calls for violence are acceptable?
Hours after Twitter agreed that this account violated their rules against hate speech and abusive behavior (for advocating the murder of Muslims in Britain), he's RIGHT BACK AT IT.
This time he calls for publishing the address & children's names of EU's human rights chiefs.
You can follow @chadloder.