The Future

Content mods would solve an urgent problem, if anyone were willing to pay them

Facebook and YouTube need moderation more than ever, but are relying on manpower where we need nuance, training, and investment.

The perfect content moderator is all-powerful, yet inexpensive. The perfect content moderator was hired with care, by someone else, probably. The perfect content moderator is well-trained, but, of course, not on company time. The perfect content moderator follows the rules to a T, yet is keenly attuned to every nuance. The perfect content moderator is a lie.

Big tech companies like Facebook and YouTube rarely fork over the cash necessary to hire full-time salaried staffers to do “low-skilled” work like content moderation, despite the fact that it has been at the core of most of the past year’s biggest scandals (and, you know, it literally involves looking at the most disturbing and awful content online). Instead, they prefer to engage fleets of contractors and temporary workers from around the globe, expecting them to enforce every rule, with full cultural nuance, in every country where these services operate. And though the Wall Street Journal reports that “pay rates for content moderators in the Bay Area range from $13 to $28 an hour,” it’s safe to assume that contractors in less affluent parts of the world earn significantly less. More moderation on these platforms is clearly necessary when livestreams occasionally end up featuring murders and child exploitation goes viral, but the chronic under-investment in this area will be difficult for these companies to overcome.

In May 2017, Facebook CEO Mark Zuckerberg announced the company would be adding 3,000 content moderators to its ranks (at that point already 4,500 strong). Five months later that figure was bumped up to 4,000. Within mere weeks, the tally of intended new hires on the “safety, security, and product and community operation teams” more than doubled to a hefty 10,000. Soon after, YouTube did the same, announcing it would be “bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.” The most obvious interpretation here would be that the companies themselves would be hiring 10,000 new employees, but reality has unfortunately been much messier.

In the weeks following Zuckerberg’s May 2017 announcement, job postings for the 3,000 content moderator positions were mysteriously absent from Facebook’s official careers page, suggesting that the company fell back on old habits and outsourced the work to a series of subcontractors. A December investigation by the Wall Street Journal found that many of the workers used for content moderation by companies like Facebook and YouTube come from the cubicle farms and call centers of India and the Philippines, which are often managed by outsourcing firms like PRO Unlimited Inc., Accenture PLC, and SquadRun Inc. Facebook requires moderators to use their personal accounts, rather than administrative ones, when moderating content, and provides them with a mere two weeks of training.

The unsuitability of this short training period for someone who’s supposed to clearly and consistently represent all of the company’s policies and beliefs thousands of times a day is only further emphasized by Facebook’s latest update to its community standards page. Though the guidelines published Tuesday morning are significantly more robust than before — notable differences include “policy rationale” sections, specific “do not post” lists (with tiers!), and “warning screen” explainers — the huge increase in complexity doesn’t bode well for Facebook’s stressed-out, overworked, undertrained contractors.

Facebook's guidelines for Hate Speech.

Sarah Katz, who worked as a content moderator for Facebook until October 2016, told the Wall Street Journal “she saw anti-Semitic speech, bestiality photos and video of what seemed to be a girl and boy told by an adult off-screen to have sexual contact with each other…” The WSJ also noted that “she reviewed as many as 8,000 posts a day, with little training on how to handle the distress though she had to sign a waiver warning her about what she would encounter.”

Facebook’s disconcerting treatment of the class of “employees” so crucial to its daily functioning goes far beyond stressful workplace conditions. A June report by the Guardian revealed that Facebook not only inadvertently exposed the personal information of some of its content moderators to suspected terrorists back in October 2016, but that the issue remained unfixed for a month. “I don’t have a job, I have anxiety and I’m on antidepressants. I can’t walk anywhere without looking back,” one of the affected content moderators told the Guardian after fleeing his home country once he’d learned that seven users associated with an alleged terrorist group he’d banned from the site had accessed his personal Facebook profile.

Facebook’s content moderators weren’t exactly great at enforcing even the simpler set of rules they were stuck with for the last year or so. A recent investigation by Motherboard found that they were unable to identify public Facebook posts advertising highly sensitive personal information (like credit card and Social Security numbers), which were so easily available that reporter Lorenzo Franceschi-Bicchierai found them with a mere Google search. Of course, after Motherboard contacted Facebook about the posts, some of them were removed, but therein lies the issue. Facebook’s approach to content moderation isn’t proactive; it’s reactive. And sporadically reactive, at best.

Take the case of Huffington Post reporter Jesselyn Cook, for example. In February, a private Facebook group roughly 72,000 members strong began a campaign of targeted sexual harassment against her. Though she (and dozens of others) reported the harassment to Facebook, absolutely nothing was done... that is, until she decided to write about it for HuffPo and asked Facebook for comment on the whole hellish ordeal in her official capacity as a reporter. Then, of course, the group in question was taken down within hours.

On a day-to-day basis, it honestly isn’t all that surprising that Facebook’s moderators are this ineffective. It’s a couple thousand undervalued and overworked contractors against two billion users’ worth of disturbing content. Daily exposure to countless dick pics, beheadings, and bestiality takes its toll.

Facebook’s problems aren’t going to be solved by adding more moderators, or even by writing better rules. The people being hired to tame this behemoth are set up to fail, and the only thing that will fix that is systemic change. Companies like Facebook and YouTube have succeeded in their quests for rapid expansion by keeping their teams of engineers and designers almost infamously “happy” and well-paid; with billions of users knee-deep in their platforms’ content, it only makes sense that the same mindset should apply to the people in charge of keeping that content in check.