ROOTING OUT THE DEEPFAKES

Stakeholders Must Work Together to Ensure Detection Technology Keeps Pace

Five years ago, in an optimistic TEDx talk, computer scientist Hao Li praised the limitless potential of computer-generated videos that replicate humans onscreen.

Four years later, Li struck a more ominous tone at the World Economic Forum in Davos, Switzerland. Questioned about the dangers posed by artificial intelligence-synthesized media — better known as “deepfake” videos — Li acknowledged that nefarious actors have hijacked the technology to digitally insert celebrities into nonconsensual pornography, alter the words of politicians and threaten global markets.

Li, a distinguished fellow at the University of California, Berkeley, and CEO of Pinscreen, a Los Angeles-based artificial intelligence company, now says the technology is at a crossroads. The potential remains, but as the technology improves, so do the risks.

In its early stages, producing a few seconds of realistic video required hundreds of hours of work at enormous cost. Now basic cellphone apps let anyone swap faces in seconds, and deepfake videos, ranging from funny to dangerous, are ubiquitous.

Deepfake creators have used AI technology to graft the faces of women, including celebrities, seamlessly onto the bodies of adult film performers. One visual effects artist created a deepfake of Tom Cruise so convincing it drew 100 million views on TikTok.

Technology to identify and stop ill-intentioned deepfakes cannot keep pace. Siwei Lyu, a computer science professor at the State University of New York at Buffalo who developed some of the earliest deepfake detection software, said there are far more ideological and economic incentives to advance deepfake technology than to build tools that detect it.

“The balance is like any other technology. Nuclear technology can be used to generate power, but can also be used to make an atomic bomb,” Lyu said. 

Stopping these “bad uses” while being mindful of free speech will require a multistakeholder approach that includes researchers, technology companies and government, Lyu said.

Facebook, in partnership with Microsoft and university researchers, launched a $10 million “Deepfake Detection Challenge” to spur development of better methods for detecting AI video manipulation. Reality Defender, a nonpartisan nonprofit, brought together technologists, academics and media organizations, including Microsoft, Google, the University of California, Berkeley and the Technical University of Munich, to create a tool that helps reporters, campaigns and researchers root out deepfakes.

In California and Texas, new laws criminalize deepfake political videos close to elections. And this year, the U.S. Congress passed laws requiring federal agencies to study the technology and provide regulatory recommendations.

“The legislative efforts are very welcome and timely for those extreme cases,” said Lyu, who testified before Congress in 2019. “But I think what the law should do is limit those worst cases [and] leave the middle ground.” —By Alan Gomez
