The Future

How platforms alter history

The rush to delete digital evidence of violent tendencies or ideologies is understandable, but what is the cost?

After Nasim Aghdam opened fire at YouTube’s San Bruno headquarters last week, injuring three and killing herself, social media platforms swiftly moved to scour her from the web.

Her four YouTube channels disappeared, replaced by a message saying they’d been removed for “multiple or severe violations” of the site’s policies. Instagram and Facebook both deleted her profiles as well. Even her personal website is gone.

This has become standard practice. Facebook and Instagram removed the profiles of Parkland shooter Nikolas Cruz, where he’d left violent and racist posts. The Facebook account of Sutherland Springs killer Devin Kelley was deleted as well. Omar Mateen’s Facebook profile, which he used to post a threat immediately before he attacked the Pulse nightclub, also disappeared.

Spokespeople for YouTube, Twitter, and Reddit didn’t reply to questions for this story. A spokesperson for Facebook said that it was the company’s policy to remove the accounts of people who commit atrocities.

What’s behind that impulse to delete? When it comes to acts of violence, taking down profiles may help stem the urge to build a logical case for an act where logic played no role. Distancing a platform from a senseless act of violence may be a public relations move, or a matter of taste, or an attempt to keep a profile from becoming a shrine for copycats. But the practice has little precedent in pre-internet history, and an offline equivalent would carry troubling connotations: imagine if no one were allowed to read the Unabomber’s letters.

Norms around social media are still evolving, and the urge to delete the profiles of wrongdoers hasn’t always been applied consistently. Surprisingly, the Twitter account of Dzhokhar Tsarnaev, one of the two brothers who bombed the Boston Marathon in 2013, was never taken down, even after he received the death penalty in 2015.

Some killers leave behind manifestos, but others don’t (Aghdam’s accounts contained no such content that The Outline could ascertain), and their profiles get removed, too. A similar rush to delete plays out when networks of accounts are used to spread propaganda. Earlier this year, for instance, it emerged that Twitter had deleted more than 50,000 accounts it believed Russian operatives had used to interfere with the 2016 presidential election in the United States. Facebook has also removed hundreds of accounts and pages linked to Russian interference, as well as tens of thousands believed to be part of an effort to disrupt the German election. Last month, Reddit co-founder Steve Huffman acknowledged that the site had removed “a few hundred” accounts linked to Russian disinformation.

In a practical sense, it’s impossible to delete all of a person’s digital footprints. Web caches and the Internet Archive leave behind ghost accounts that journalists pore over, looking for clues. Removing the live version of a profile just punches holes in the record, leaving people to make sense of events after the fact with even less information. A list of Russia-linked profiles that Twitter provided to Congress eventually leaked, but the data excludes basic information like which other accounts the bots had followed. Services like Twitter presumably retain copies of what gets deleted, but those copies are rarely, if ever, made available to the public.

Josh Russell, an independent researcher who has written for the Daily Beast about Russian interference in the U.S. election, wonders whether we’d have a fuller understanding of exactly how misinformation spread during the campaign, and whether there were even more accounts linked to the propaganda networks than big tech companies have admitted, if the response to discovering misused accounts had been to archive instead of delete.

“People have floated the idea of Twitter for example un-suspending the accounts, and locking them, then marking the profiles and tweets somehow,” he said.

Social networks would still need to make curatorial decisions, Russell cautions, like whether non-banned users would be able to undo a share of content posted by a locked account.

“But it would still be better than what we have to rely on now,” he said, which he characterized as digging through “little nuggets of info here and there.”

Tumblr, the blogging platform that Yahoo! bought in 2013, provided a glimpse of what that strategy might look like when it sent an email to users who had interacted with 84 accounts it suspected of being linked to a Russian disinformation campaign. Tumblr deleted the accounts, but left the reblog chains alive as a sort of online museum of disinfo.

Jon Christian is a contributing writer for The Outline.