The narrative around the rise of the alt-right and the prevalence of fake accounts on social media used in disinformation campaigns makes it seem as though these efforts came from nowhere, catching everyone unaware. But a report from Slate last week detailed exactly how black feminists on Twitter and their followers recognized and exposed an organized trolling campaign in the summer of 2014.
Even before the Russian Internet Research Agency weaponized these tactics for the 2016 election, anonymous 4chan users spread #EndFathersDay through false-flag Twitter accounts, posing as black women to exacerbate fissures between feminists of color and white feminists and to rile up conservative pundits. But few outside the online community of black women realized at the time that this was a coordinated operation.
Slate’s story, which details how users Shafiqah Hudson and I’Nasah Crockett spotted this activity, underscores how badly social media platforms lack a suitable way to report disinformation and harassment. The same thread runs from Twitter’s inaction during the #EndFathersDay debacle (to counter that troll campaign, Hudson created the hashtag #YourSlipIsShowing to expose bogus accounts so they could be reported or blocked) to current complaints about moderation.
When platforms like Twitter have been asked to better moderate their sites, the result is usually the rollout of a tepid new policy, or the platform dragging its feet before finally banning or deleting a bad actor, like white nationalist Paul Nehlen. Proposed AI solutions fall short because AI cannot navigate the nuances of different cultures; even people outside a culture have a difficult time understanding the subtleties of a particular in-group. According to Slate’s piece, people who took the tweets under #EndFathersDay at face value believed they came from actual black women rather than trolls performing a caricature of AAVE.
The consequences of not taking representation seriously can be deadly when bad actors are allowed to fester. For instance, politically motivated parties made liberal use of Facebook to target Rohingya Muslims in Myanmar last year, and Facebook, which until recently had no Burmese-speaking staff members, was caught flat-footed.
The obvious answer is for platforms to employ moderators and staff who reflect the widely diverse global population that uses their products. Marginalized people can best recognize what it looks like when a group is targeted, and if they are not listened to or given a place among the ranks of social media companies, tactics for spreading hate will remain highly effective.