The Future

Neither conservatives nor Mark Zuckerberg actually care about fixing Facebook

And why would they?

Facebook is currently embroiled in so many high-profile debates and scandals, it’s getting difficult to keep track. There’s the infuriating question of why InfoWars is still allowed on the site, despite the publisher’s numerous flagrant violations of Facebook’s rules on harassment, hate speech, and false news; why the company can’t seem to take a strong stance on anything related to genuine content moderation after years of clear evidence that it’s beyond necessary; why it is still twisting itself into knots in an attempt to court conservatives when their claims of persecution are so clearly bunk; and many, many more. And while the specific answers to each may be infinitely nuanced and increasingly different from one another, they all share one commonality: ultimately, they don’t matter whatsoever.

Neither Facebook nor conservatives remotely care about actually fixing the systemic issues at play here, and why would they? There’s nothing better for their wildly different interests than further entrenching themselves in this messy, high-profile wrestling match. For conservatives, it’s the ultimate rallying cry; fear and anger are the perfect motivators. Actually winning the war against “censorship” would mean losing a major talking point. For Facebook, it’s the perfect way to further consolidate its power. The fact that Facebook even has a seat at the table normalizes the very abnormal fact that a social media platform has somehow grown so fat and powerful that it could instigate this debate in the first place, and that it can continue existing at the nexus of our political and culture wars as long as it never takes too strong a stance.

Facebook has grown to the size and scale where its core issues reflect those of society at large. At a certain point, to try to fix Facebook is to try to fix the problems created by shortening the distance between normal people and, say, Holocaust deniers to nothing at all. Since declining to delete InfoWars, Facebook has faced increasing scrutiny for giving a platform to people who are wrong, and perhaps willfully so, like Sandy Hook denialists.

In an interview with Recode’s Kara Swisher, Facebook CEO Mark Zuckerberg struggled somewhat to explain why Facebook polices content based on intent rather than on anything more sweeping or general:

It’s hard to impugn intent and to understand the intent. I just think, as abhorrent as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I’m sure you do. I’m sure a lot of leaders and public figures we respect do too, and I just don’t think that it is the right thing to say, “We’re going to take someone off the platform if they get things wrong, even multiple times.”

What we will do is we’ll say, “Okay, you have your page, and if you’re not trying to organize harm against someone, or attacking someone, then you can put up that content on your page, even if people might disagree with it or find it offensive.” But that doesn’t mean that we have a responsibility to make it widely distributed in News Feed…

During the hot mess that was Tuesday’s House Judiciary Committee hearing on social media filtering, spokespeople from Facebook, Google, and Twitter were pitted against a bunch of very mad conservatives and exasperated Democrats. Like the hearings that came before it, Tuesday’s was three hours of grandstanding and nonsensical repetition of talking points from both sides of the room, with little to no actual listening or productive debate. Conservatives yelled about Gateway Pundit losing likes on Facebook (among other things) while representatives of the three companies cycled through statements that could have been copied and pasted directly from their last five years of press releases.

One of the more complicated parts of the debate over online content moderation centers on Section 230 of the Communications Decency Act (which, we promise you, is significantly less dry a topic than it sounds). Section 230 is basically the glue that holds the modern internet together. The legislation says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It absolves platforms like Facebook and Twitter of legal responsibility for all the dumb things their users post and do on their sites by enshrining them as mere hosts of other entities’ content rather than content creators or publishers themselves. It was also the subject of a particularly contentious exchange between Twitter’s Nick Pickles (yes, that’s his real name) and Florida Rep. Matt Gaetz during Tuesday’s House hearing.

Gaetz attempted to wrangle a ‘Gotcha!’ moment from the Twitter spokesperson by prodding him over whether the company could be protected by both the First Amendment and Section 230, never mind the fact that, beyond being shielded from government infringement on their own right to free speech, these platforms aren’t zones where the right to free speech ordinarily applies. Ostensibly, this was because Gaetz (along with a significant percentage of the Congresspeople present at the hearing) believed the very act of a tech platform censoring a particular group or person based on the content of their speech should count as a form of speech itself, which in turn could disqualify the companies policing that speech from the protections afforded by Section 230.

But that’s simply not how Section 230 works. The legislation describes those protected under it as any “provider or user of an interactive computer service,” and it only gets more expansive from there.

“Section 230 protects [sites] from lawsuits from an aggrieved plaintiff about the content of the speech, but 230 says nothing about the provider's ability to censor or regulate the speech on their site,” George Freeman, executive director of the Media Law Resource Center and an expert on First Amendment issues, told The Outline in a phone interview. “And just like a newspaper publisher, it's really their decision as to what goes in the newspaper or not. So it's the decision of the internet company to decide what goes on its site. It has a perfect right to basically exclude or take down really anything it wants other than if it's being badly discriminatory.”

People are easily manipulated and often wrong, and they’re spending their days on a platform that is, at best, only just now fumbling its way toward dealing with that, with no clear path to success. Facebook’s reluctance up to now is, in a way, understandable: who the hell would want to be responsible for sorting out the relatively new and still-rapidly-evolving problem of tech acting as an accelerant to lies, which still travel faster and entrench themselves deeper than the truth? It is in no way the sole fault of a single entity, though companies like Facebook have undeniably made the problem worse. Given the sheer size and number of forces at work, it’s unclear whether there even is a solution, but to keep gesturing at one presumes that Facebook should be allowed to remain the one managing the problem.
