The Future

Experts fear face swapping tech could start an international showdown

Video forensic specialists are worried Deepfakes could have national security repercussions.

Earlier this month, at the Defense Advanced Research Projects Agency’s (DARPA) Media Forensics program meeting, experts on digital video manipulation found themselves preoccupied by a Reddit community of Deepfake enthusiasts, which has been using neural networks to produce convincing face swaps — most of which, crudely, insert the likenesses of celebrities into pornographic films.

“We were all talking about this,” said Hany Farid, a professor of computer science at Dartmouth College who attended the gathering. “We started this program 18 months ago, and this wasn’t even on our radar screen. It wasn’t even close.”

Experts in forensic video were blindsided by Deepfakes, a machine learning system that can be trained to paste one person’s face onto another person’s body, complete with facial expressions. The effect isn’t yet more convincing than conventional computer graphics techniques, in Farid’s opinion, but it could democratize Hollywood-level special effects fakery — and, potentially, lead to a flood of convincing hoaxes.

Farid frets about the “nightmare situation of somebody creating a video of Trump saying ‘I’ve launched nuclear weapons against North Korea,’ and that video going viral, and before anyone gets around to realizing it’s fake, we have full-blown nuclear holocaust,” he said. “I would say I’m not prone to hysteria or exaggeration, but I think we can agree that’s not entirely out of the question right now.”

The idea of a doctored video sparking an international showdown isn’t entirely hypothetical. In 2016, Farid pointed out, Pakistan’s defense minister read an article on awdnews.com, a fake news site, which falsely claimed that Israel had threatened Pakistan with a nuclear strike. In response, the Pakistani official tweeted an atomic threat of his own: “Israel forgets Pakistan is a Nuclear [sic] state too.”

And now bad actors have access to Deepfakes, which is becoming more advanced all the time. Shortly after Motherboard first reported on an early, difficult-to-use version of the software, an expert predicted that a more user-friendly version could appear within a year or two. In reality, it took just two months for a coder on Reddit to assemble an app that can produce face-swapped videos with no advanced machine learning knowledge.

Catalin Grigoras, the director of the National Center for Media Forensics, agreed with Farid that it’s still easy for experts to spot modified footage by looking for telltale artifacts that the process leaves behind in a faked video, such as blockiness, compression artifacts, and “fixed-pattern noise” that don’t match the rest of the frame. But he worries that by combining Deepfakes with software like VoCo, an Adobe demo that processes recordings of a person’s voice in order to synthesize new utterances they never actually said, miscreants will be able to create hoax clips that are effective enough to spread widely before experts can weigh in on their veracity.
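To make that idea concrete, here is a minimal, hypothetical sketch in Python (using NumPy and SciPy) of one such consistency check: it compares the high-frequency noise residual inside a suspected face region with the rest of the frame. The region coordinates, filter settings, and the stand-in frame are illustrative assumptions, not a description of Grigoras’s actual tools.

    # Illustrative only: a spliced region often carries noise statistics that
    # don't match the surrounding frame. This compares a high-pass "noise
    # residual" inside a suspected face region against the rest of the image.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(gray, sigma=1.5):
        # High-pass residual: the frame minus a blurred copy, leaving mostly noise.
        return gray - gaussian_filter(gray, sigma)

    def region_inconsistency(gray, face_box):
        # Ratio of residual variance inside the suspected region vs. outside it.
        y0, y1, x0, x1 = face_box  # hypothetical bounding box of the pasted face
        residual = noise_residual(gray.astype(np.float64))
        mask = np.zeros(gray.shape, dtype=bool)
        mask[y0:y1, x0:x1] = True
        return residual[mask].var() / max(residual[~mask].var(), 1e-12)

    # A random stand-in; in practice this would be a decoded grayscale video frame.
    frame = np.random.rand(480, 640)
    score = region_inconsistency(frame, (100, 260, 220, 380))
    print(f"noise-variance ratio: {score:.2f}")  # values far from 1.0 merit a closer look

Real forensic pipelines are far more involved, but the underlying principle, a statistical mismatch between a pasted region and its surroundings, is the one Grigoras describes.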

There’s a long history, Grigoras pointed out, of manipulated imagery sneaking into the mainstream media. In 2006, a freelance Reuters photographer filed shots taken during the Lebanon War that he had edited to show more smoke after an airstrike, prompting the news organization to name a new chief photographer for the Middle East. In 2008, newspapers including the New York Times ran a photo of an Iranian missile test that had been manipulated to add an extra missile. And last year, the conservative blog Drudge Report posted yet another version of the same image that had been modified, readers pointed out, to include the visage of Jar Jar Binks, a character from Star Wars.

And in the long term, Farid has a structural concern about artificial intelligence research. Most work on machine learning, he said, has focused on what the technology can create, but if the internet of the future turns out to be awash in computer-generated hoaxes, maybe more resources should be dedicated to understanding how to defend against the material such systems produce.

“The reality is that the number of people working on the forensics side, like me, is relatively small compared to the number of people working on the other side,” Farid said. “We are greatly outnumbered and out-resourced. Google is not developing forensic techniques. Facebook is not developing forensic techniques. It’s a bunch of academics. We’re outgunned.”