On Tuesday, a series of videos created by artists and first reported on by Vice Motherboard showed celebrities, Mark Zuckerberg and Kim Kardashian among them, gushing about how grossly they’ve profited from the data scraped from the apps they created. The videos are known as “deepfakes” — fairly seamless composites of available photographs plus passable audio that appear to be the real person saying and doing things. Until now, platforms have not taken a hard line against deepfakes. But what are Silicon Valley tycoons to do when their own reputations are the target of the “free speech” “marketplace of ideas” they have gone to such lengths to defend?
Silicon Valley has had to be dragged backwards into regulating the online cesspools it’s created through a commitment to “free speech” and “respect” for different points of view. Almost all the platforms, from Facebook to YouTube to Twitter, have balked at the idea of taking firm stances against notoriously disruptive presences like white supremacists and dedicated conspiracy theorists. In part this is because they have struggled to draw lines in a way that doesn’t create a lot of work for themselves or regulate a lot of gray-area discussion out of existence. But it’s also in part because, for a long time, growth and big numbers were the driving motivations of these platforms, and incensing people with radical fringe politics and trolling was a devastatingly effective way of getting those results.
For a long time, places like reddit or Facebook defended themselves as “marketplaces of ideas” where, ultimately, the truth would always win. Platforms are, in a sense, still trying to make this stance work: Rather than shut down blatantly manipulated content like the recently viral video doctored to make Nancy Pelosi appear drunk while delivering a speech, Facebook tried desperately to pile on “context” and “fact checks” in hopes of taking the wind out of the video’s sails. By most accounts, this did not work. But all of Facebook’s messaging around similar incidents still tries to assume a high moral ground, “nobly” taking on the cause of defeating the content that is tearing apart society with “reason” and “facts” and “logic.” This approach is plainly ineffective, yet platforms keep trying it.
The notion of making deepfakes of the people who allow them to exist — putting fictionalized thoughts in their mouths — is so brilliant that I’m enraged no one thought of it sooner. It’s not quite the same as, for instance, having a bunch of Nazis attack Jack Dorsey on Twitter to illustrate how bad the Nazi problem is; Jack Dorsey doesn’t personally need to be on Twitter that much, and, as the founder, has god-level moderation capabilities.
What is so magical about the Zuckerberg deepfake is that it attacks his personal public profile, a thing that all tech barons are deeply sensitive to, in a way that his ability to control his immediate environment cannot stop. Only a quantum shift in his approach to moderation in general would prevent something like the deepfake video from affecting him; if he’s truly committed to a discourse that strikes down fake information, he will not blink. More to the point, if he were to use his super-mod powers to crush the video, it would be obvious to the outside world that he is treating himself as a special case.
Why have we wasted time making deepfakes of anyone but the people who refuse to take them seriously? Why haven’t we turned the full force of conspiracy media against the people who refuse to deal with it in any real way? It is an effective path to change mainly because most tech people do not take problems seriously until they are personally affected; this is why most startups address the non-problems of incredibly rich people, like doing laundry or getting groceries delivered. We should work a lot harder to lay the full negative force of the bad parts of online directly on the people who allow them to exist, and just see how that pans out.