Safety

Facebook’s idiotic solution to revenge porn

The company’s latest proposal is cruel and potentially harmful to victims.

It sounds like something out of a dystopian novel: Want to prevent becoming a victim of revenge porn? Just send copies of your nudes to an internet megacorp for safekeeping. If they’re ever posted publicly, they’ll be nuked out of orbit.

Earlier this week, Facebook announced it is piloting a revenge porn tool in Australia — one that defies all logic about internet privacy by requiring potential victims to send their nude photographs to the company in the event of an emergency. If a user has reason to believe that someone else is about to share their explicit images on the platform without consent, they can submit the images to the platform via Facebook Messenger to preempt revenge posting.

Because stopping a malicious partner from sharing explicit images isn’t humiliating enough on its own, as part of this process a potential victim has to do the last thing in the world they could possibly want to do in this situation: send their photos to a total stranger on Facebook’s Community Operations team for review. After the nudes have been appropriately scrutinized, they go through a round of hashing, a process that creates a digital fingerprint of the image that can be used to flag and take down any matches found on the platform. Even though Facebook says the original photo will not be permanently stored, this intimate photo escrow service raises numerous privacy and security concerns.
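Facebook has not published the details of its matching system, but the general idea of an image “fingerprint” can be sketched with the open-source imagehash library. The file names and threshold below are illustrative assumptions, not Facebook’s actual implementation.

```python
# A rough sketch of perceptual hashing using the open-source `imagehash`
# library. Facebook has not disclosed its matcher, so this only illustrates
# the general technique: a compact fingerprint is derived from the image,
# and future uploads are compared against that fingerprint rather than
# against the photo itself.
from PIL import Image
import imagehash

# Fingerprint the image submitted by the potential victim.
reported_hash = imagehash.phash(Image.open("reported_photo.jpg"))

# Later, fingerprint a newly uploaded photo and compare the two.
upload_hash = imagehash.phash(Image.open("new_upload.jpg"))

# Perceptual hashes are compared by Hamming distance; identical or
# near-identical images yield small distances.
if reported_hash - upload_hash <= 5:  # illustrative threshold, not Facebook's
    print("Possible match: flag upload for takedown review")
```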

First, it is unclear how effective the tool will be at catching non-consensually shared images and removing them before they can cause harm. For the tool to work, the victim has to have access to the same images as the perpetrator. While this may be reasonable when explicit images were exchanged through a messaging app where both parties could save them, this specific technological solution cannot protect victims who may have been photographed while sleeping, drugged, or otherwise incapacitated. And if a photo is cropped, edited, or otherwise manipulated, it could evade detection and may never be flagged for takedown.
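To make that last limitation concrete, here is a hedged illustration, again using imagehash as a stand-in for whatever matcher Facebook actually uses: even a simple crop can shift the fingerprint enough that a naive distance check no longer flags the image.

```python
# Illustration of how editing an image can defeat naive fingerprint matching.
# The `imagehash` library stands in for Facebook's undisclosed matcher.
from PIL import Image
import imagehash

original = Image.open("reported_photo.jpg")

# Simulate a perpetrator cropping away half of the image before reposting.
width, height = original.size
cropped = original.crop((0, 0, width // 2, height))

# A large Hamming distance means a simple threshold check will not treat
# the cropped copy as a match for the reported image.
distance = imagehash.phash(original) - imagehash.phash(cropped)
print(f"Hamming distance after cropping: {distance}")
```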

A potential victim has to do the last thing in the world they could possibly want to do: send their sensitive photos to a stranger who works at Facebook.

And then there is Facebook itself, which originated as Facemash, a Hot or Not-style website built by Mark Zuckerberg using photos taken without permission from Harvard’s internal social network. Once Facemash became Facebook, Zuckerberg marveled at the trove of personal information he had gathered. “Yeah so if you ever need info about anyone at Harvard. Just ask. I have over 4,000 emails, pictures, addresses, SNS,” he told a friend. “They trust me — dumb fucks.” Facebook was notoriously lax on privacy and security in its early days, and was repeatedly criticized for its confusing privacy settings; in 2009, the company rolled out changes that “are clearly intended to push Facebook users to publicly share even more information than before,” in the words of the digital civil rights nonprofit Electronic Frontier Foundation. In 2010, Mark Zuckerberg famously argued that privacy was no longer a “social norm.” In 2011, the FTC charged Facebook with deceiving users by repeatedly breaking its own promises about the privacy of their data, a case the company settled by submitting to two decades of independent privacy audits. Asking victims to send photos to a company with a track record of privacy violations goes against common sense. Motherboard also caught Facebook contradicting itself about how the revenge porn prevention service works; a spokesperson said twice that the photos would be blurred when human moderators reviewed them, which Facebook later clarified is not the case.

Facebook’s move could also make the revenge porn problem worse. You know how your bank warns you that “We will never ask for password information in an email”? By creating an avenue solely for the intake of nude images, Facebook is introducing people to the idea that there might be legitimate situations in which a trusted source would need you to send it nudes. Internet con artists are incredibly adept at counterfeiting legitimate emails from online services and impersonating employees to trick people into handing over sensitive information; this kind of phishing is arguably the single biggest threat facing people using the internet today. In the past, many criminals have had to invest significant time and resources in creating malware that steals photographs and lets them target young women through social media platforms for blackmail and extortion. If this experimental process were rolled out to a larger audience, scammers would undoubtedly add impersonating Facebook employees and emails to their existing tactics for stealing nude photos. Then there is the fact that Facebook has created an enticing target for hackers and rogue employees. Even if a perfect technical solution were created, there’s no guarantee that victims would be safe from lax internal policies or external threats.

Facebook’s head of security, Alex Stamos, has gone on the defensive on Twitter in response to critics, noting that the company worked with advocates and that “We are aware that having people self-report their images carries risk, but it's a risk we are trying to balance” against the problem of revenge porn.

But in technology, there is always another option for solving big problems. In this case, those options could have included investing more resources in pornography detection, expanding copyright protection tools and policies to cover revenge porn, making penalties for uploading revenge porn more severe, hashing the photos on the victim’s local machine rather than on Facebook’s servers, or allowing users to report revenge porn in advance without having to upload an actual photo — anything that would keep users from having to send in their own compromising images while being threatened by a malicious partner. While countless women and LGBTQ victims of this sex crime have wanted ways to stop it from happening to others, this isn’t the right way to empower victims or restore agency. Asking potential victims to trust their most personal, intimate photographs to a faceless internet company in a time of crisis is asking too much... especially when that company has a questionable relationship with privacy.
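One of those alternatives, hashing on the victim’s own device so that only the fingerprint ever leaves it, is not hard to imagine. The sketch below is purely hypothetical: the endpoint and payload fields are invented, and Facebook offers no such API.

```python
# Hypothetical sketch of client-side hashing: the fingerprint is computed on
# the victim's own device and only the hash, never the photo, is submitted.
# The endpoint URL and payload fields are invented for illustration.
from PIL import Image
import imagehash
import requests

def report_image_locally(path: str) -> None:
    # Compute the perceptual hash locally; the image never leaves the device
    # and no human moderator ever sees it.
    fingerprint = str(imagehash.phash(Image.open(path)))

    # Submit only the fingerprint to the platform's (hypothetical) reporting endpoint.
    requests.post(
        "https://example.com/revenge-porn-reports",  # hypothetical endpoint
        json={"image_hash": fingerprint},
        timeout=10,
    )

report_image_locally("my_photo.jpg")
```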

Jessy Irwin is a researcher and cybersecurity educator who teaches people to protect themselves and their data online.