The Future

Facebook has more options for removing bad content than it will admit

The company's reaction to the Christchurch shooting betrays a lack of proactive action.

Following the Christchurch shooting, in which 50 people were killed by a far-right extremist who has since been charged and is awaiting trial, Facebook is once again being scrutinized for the role the platform played in the attack. The shooter streamed the event on Facebook Live for 17 minutes, and the first user report on the video came in 12 minutes after the live broadcast ended. The video was viewed a total of 4,000 times before being removed from the site.

Days after the attack, the company said it would consider changing how users report live or recently live content. “As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review,” VP of product management Guy Rosen wrote in a blog post on Wednesday. As the reporting system currently stands, the users who flagged the shooter’s live stream did so for “reasons other than suicide,” which may have delayed Facebook’s reaction to it.

In the blog post, Rosen claimed that Facebook cannot time-delay its live streams the way broadcast TV does, given the volume of videos posted to the site, and that a time delay wouldn’t actually address the problem anyway. Rosen also claimed a delay would hinder the user reports the company relies on to catch wind of troubling content.

Facebook continues to claim it has limited options for dealing with the horrifying content on its site, when its options are in fact infinite. The company can do whatever it wants; that it continues to pretend it can’t is laughable.

Rosen’s post makes sense only if you accept two unnecessary presumptions: one, that Facebook is obligated to transmit millions of livestreams from whoever chooses to post one; and two, that the streams must be visible to all of creation in order to be policed (“reported” on) by users. Neither of these is actually true. Facebook can limit streams however it wants; it can employ as big or as small a workforce as it needs to police them. Allowing every stream to run until some user flags an issue with it is itself an active choice, and Facebook is choosing to accept the consequences of that choice, in this case to the extreme detriment of its users.

Platforms including Facebook and YouTube have flatly refused to deal proactively with harmful content whenever doing so would threaten the sheer size of the incoming content deluge. It’s not clear how many more times they will have to be caught favoring their business over their users’ comfort before they confront these choices for what they are.