Inside the government’s war against deepfakes

WASHINGTON — Adult film star Raquel Roper was watching a trailer for popular YouTuber Shane Dawson’s new series focusing on the video manipulation trend known as deepfakes. Near the end of the video, she noticed a clip taken from one of her films — but it wasn’t her face depicted onscreen. It was Selena Gomez’s.

Selena Gomez “deepfaked” onto Roper’s video.

Roper was shocked. “You’re taking … the artist’s work, and you are turning it into something that I didn’t want it to be. Selena Gomez has not given consent to have her face morphed into an adult video,” she said in a response video.

“There is no way for me to take legal action against this, and I think that’s what the scariest part of it all is.”

That may soon change. Members of Congress, government officials and researchers are in what some call an “arms race” against deepfakes and their creators, which could lead to legislation against this emerging technology.

The term “deepfakes” refers to videos that use artificial intelligence techniques to combine and superimpose multiple images or videos onto source material, which can make it look as if people did or said things they did not. The most widely reported instances of deepfakes include celebrity pornography videos like Roper’s or video manipulations of politicians.

Filmmaker Jordan Peele teamed up with BuzzFeed to create a PSA, delivered by a deepfaked Barack Obama, about the dangers of deepfakes. As of March 5, it had over 5 million views on YouTube.

Some, however, believe deepfakes are not as dangerous as reported. The Verge argued deepfake hoaxes haven’t yet materialized, even though the technology is widely available.

In December, Sen. Ben Sasse, R-Neb., introduced a bill criminalizing the creation and distribution of harmful deepfakes — the first federal legislation of its kind. The bill died at the end of the 115th Congress on Jan. 3, but Sasse’s office said he plans to introduce it again in the current session of Congress.

“Washington isn’t paying nearly enough attention…” Sasse said. “To be clear, we’re not talking about a kid making photoshops in his dorm room. We’re talking about targeting the kind of criminal activity that leads to violence or disrupts elections. We have to get serious about these threats.”

Bobby Chesney, director of the Robert S. Strauss Center for International Security and Law at the University of Texas School of Law, briefed House Intelligence Committee Chairman Adam Schiff on the issue. Chesney said the legal solution to the proliferation of harmful deepfakes would not be a complete ban on the technology, which he said would be unethical and nearly impossible to enforce. What is feasible, he said, is a federal law that adds penalties to the state crimes many deepfake creators are already committing — whether under fraud, intentional infliction of emotional distress, theft of likeness or similar statutes.

Chesney said Sasse’s bill seems to have that purpose, but the problem will most likely not be solved with one piece of legislation.

Government agencies have also begun fighting the falsified videos through research into detection and protection against malicious deepfakes. The Defense Advanced Research Projects Agency began researching methods to counteract deepfakes in 2016 through its Media Forensics program, which has focused on creating technical solutions such as automatically detecting manipulations and providing information on the integrity of visual media.

The program manager, Dr. Matt Turek, said the goal is an automated system, deployed across the internet, that would provide a truth measure of images and videos.

To do so, Turek said, researchers look at digital, physical and semantic indicators in the deepfake. Inconsistencies in pixel levels, blurred edges, shadows, reflections and even weather reports are used to detect the presence of a manipulation.
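As a loose illustration, and not MediFor’s actual pipeline, a crude digital-indicator check might compare sharpness between the detected face region and the rest of a frame, since swapped faces are often blended in with a telltale blur. The bounding box, threshold and function names below are hypothetical:

```python
import numpy as np

def laplacian(gray: np.ndarray) -> np.ndarray:
    """4-neighbor Laplacian response; high variance means sharp detail."""
    g = np.pad(gray.astype(float), 1, mode="edge")
    return (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
            + g[1:-1, :-2] + g[1:-1, 2:])

def blur_mismatch(gray: np.ndarray, face_box, ratio: float = 3.0) -> bool:
    """Flag a frame whose face region is far blurrier than its background.

    `face_box` is an (x0, y0, x1, y1) bounding box assumed to come from
    an ordinary face detector; `ratio` is an illustrative threshold.
    """
    x0, y0, x1, y1 = face_box
    response = laplacian(gray)
    mask = np.zeros(response.shape, dtype=bool)
    mask[y0:y1, x0:x1] = True          # mark the face pixels
    face_var = response[mask].var()
    background_var = response[~mask].var()
    return background_var > ratio * max(face_var, 1e-6)
```

A production forensic system would combine many such weak signals; a single sharpness ratio is easy to fool, but it captures the flavor of pixel-level analysis.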

One of those researchers, Siwei Lyu, an associate professor of computer science at the University at Albany, and his team have been looking into detecting deepfakes by exploring two signals he said the fakes often possess — unrealistic blinking and facial movement.
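Lyu’s published detector is a neural network, but the blink signal itself can be sketched with the classic eye aspect ratio (EAR), which collapses toward zero when the eyelid closes. Everything here, from the landmark input to the threshold, is illustrative rather than Lyu’s implementation:

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR over six eye landmarks; it drops sharply when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blink_rate(eyes_per_frame, fps: float, ear_threshold: float = 0.2) -> float:
    """Blinks per minute, counting frames where the EAR dips below threshold.

    `eyes_per_frame` is a sequence of (6, 2) landmark arrays, one per
    video frame, assumed to come from a facial-landmark detector.
    """
    closed = [eye_aspect_ratio(eye) < ear_threshold for eye in eyes_per_frame]
    # A completed blink is a closed-to-open transition between frames.
    blinks = sum(1 for a, b in zip(closed, closed[1:]) if a and not b)
    minutes = len(closed) / fps / 60.0
    return blinks / minutes if minutes else 0.0
```

Humans blink roughly 15 to 20 times a minute; early deepfakes, trained largely on open-eyed photographs, blinked far less often, which is the statistical gap Lyu’s team reported exploiting.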

Lyu’s team is also attempting to prevent the creation of deepfakes by inserting “noise” into photos or videos that would keep them from being used by automated deepfake software. This “noise,” he said, would most likely be extra numbers or pixels inserted into the image, imperceptible to the human eye.
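Lyu’s prototype was not public at the time of writing, so the following is only a generic sketch of the underlying idea, an adversarial perturbation in the style of the fast gradient sign method. The loss gradient of a face detector with respect to the image, `detector_gradient`, is assumed to come from some differentiable model:

```python
import numpy as np

def protect_image(image: np.ndarray, detector_gradient: np.ndarray,
                  epsilon: float = 2.0 / 255.0) -> np.ndarray:
    """Add a signed, imperceptibly small perturbation to an image.

    `image` holds pixel values in [0, 1]; `epsilon` caps the change at
    about two intensity levels per pixel, below what a viewer notices
    but potentially enough to derail automated face extraction.
    """
    perturbed = image + epsilon * np.sign(detector_gradient)
    return np.clip(perturbed, 0.0, 1.0)
```

Because deepfake pipelines depend on automatically finding and cropping faces, a perturbation that quietly defeats the face detector starves the generator of usable training data.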

An early prototype will be available on arXiv, an electronic repository of scientific papers, by early March, he said. The tool could be used as a plugin users run before uploading an image or video online, he said, or as an add-in that platforms like Instagram or YouTube use to protect images and videos already uploaded by users.

Protections are needed against malicious false videos affecting politics, Lyu said, especially given Russian involvement in the 2016 election.

“So far, deepfake videos have been generated by individuals — a bigger organization hasn’t done it,” he said. “If there is sponsoring of this activity I think they will actually cause a lot of problems.”

Turek said the research project is expected to wrap up in 2020.

However, many deepfake creators and tech experts are worried about overregulating the technology. Alan Zucconi, a London-based programmer, creates deepfake tutorials, posted on his website and Patreon, that teach people about deepfakes’ potential positive applications, he said, such as historical re-enactments, more realistic dubbing for foreign-language films, digitally recreating an amputee’s limb or allowing transgender people to see themselves as a different gender.

Zucconi said the definition of a deepfake given in Sasse’s bill is wrong. The bill defines it as “an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.” But a deepfake, Zucconi said, is a specific product of a process called deep learning, a branch of artificial intelligence.

The solution to malicious deepfakes, he said, is education, which he said should include the topic of consent. Victims of deepfakes typically do not consent to their likenesses being used, he said, and women are disproportionately targeted. Zucconi said this points to a larger issue.

“Deepfakes are not the problem,” he said. “Deepfakes are the manifestation of something much more complex that we as a society need to address.”

Another creator of deepfakes, YouTube personality “derpfakes,” agreed, saying education would be a more effective solution than regulation. Creators of malicious deepfakes would “likely not be dissuaded by such things” as legislation, he said.

Derpfakes has focused on creating satirical deepfakes, such as inserting the actor Nicolas Cage into various pieces of pop culture. So far, derpfakes has inserted Cage into movie series like The Terminator, James Bond, Indiana Jones and Star Trek.

“My main goal with my deepfakes is to bring a smile to some faces and to show the world that deepfakes are not inherently a bad thing,” derpfakes said.

Raquel Roper, however, hopes for a law to prevent the spread of malicious deepfakes like the one made from her stolen video.

“I feel so shocked that this is legal,” she said. “…the original video is a video that I charge people money for…. It is my product. You just need to be more aware that what you may think is just a joke or a fun little project for you, it can really affect people.”


Published in conjunction with USA Today