Detecting deepfakes

Image: Freepik/AI-generated

The Challenge

Deceptively realistic avatars attempting to withdraw large sums of money or proclaiming politically controversial messages: powerful AI models can now create photos, videos, and audio recordings that are virtually indistinguishable from those of real people. Such deepfakes pose a significant societal challenge. Politicians and the media, for example, need to be able to rely on the authenticity of statements and visual documentation, while companies want to be sure they are speaking with legitimate customers on the phone. Deepfakes also make fraud detection and law enforcement considerably harder: insurance companies, for example, are increasingly confronted with fake photos of purported damage. And during police interrogations, suspects can now more easily claim that incriminating image material is fake, because successful deepfakes can only be exposed with immense effort, or in some cases not at all.

Our Solution

At the CISPA Helmholtz Center for Information Security, researchers are developing methods to reliably detect AI-generated content. To do this, they use AI themselves: they train several models on particularly extensive datasets of real and fake media. Each of these systems specializes in a distinct class of forgery artifacts and detects, for example, errors in the shadows cast by objects, inconsistencies in facial expressions, or incongruent lip movements when people speak. When combined, these models are particularly effective. At the same time, the programs automatically trigger a reverse image search to check whether similar images are already circulating on the internet, since such images could have served as source material for deepfakes. The system also checks the metadata and embedded digital watermarks of media files. In tests with benchmark datasets, it detects 98 percent of all fakes – far more than conventional programs. Unlike those programs, the CISPA model also shows exactly why it considers a file to be fake: it marks suspicious areas within images, for example, and provides explicit explanations for its assessment. This allows users of the tool to form their own independent opinion about the credibility of an image in real time.
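To make the ensemble idea more concrete, here is a minimal sketch in Python. It is purely illustrative and not CISPA's actual implementation: the detectors are hypothetical placeholders for trained models that each return a fake-probability, the scores are combined by weighted averaging, and a small Pillow helper shows how basic image metadata can be read as one additional signal.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

from PIL import ExifTags, Image

# Hypothetical detector signature: takes an image path and returns a
# fake-probability in [0, 1]. In practice each detector would be a
# trained model specializing in one artifact class (shadows, facial
# expressions, lip movements, ...).
Detector = Callable[[str], float]

@dataclass
class WeightedDetector:
    name: str
    detect: Detector
    weight: float  # how much trust the ensemble places in this specialist

def ensemble_score(path: str, detectors: List[WeightedDetector]) -> float:
    """Combine the specialists' scores into one weighted fake-probability."""
    total = sum(d.weight for d in detectors)
    return sum(d.weight * d.detect(path) for d in detectors) / total

def classify(path: str, detectors: List[WeightedDetector],
             threshold: float = 0.5) -> Dict:
    """Return a verdict plus per-detector scores, so users can see
    which specialist raised the alarm (the explainability aspect)."""
    evidence = {d.name: round(d.detect(path), 3) for d in detectors}
    score = ensemble_score(path, detectors)
    return {"fake": score >= threshold, "score": score, "evidence": evidence}

def exif_summary(path: str) -> Dict:
    """Read EXIF metadata with Pillow; missing or implausible fields can
    serve as one signal among many, never as proof on their own."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}
```

Weighted averaging is only one way to combine specialists; a production system might instead train a meta-classifier on the individual scores so that the combination itself is learned.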

How are we already benefiting from it today?

In order to put this application into practice as quickly as possible, Philipp Dewald, Tim Walita, and Peter Stolz founded the spin-off company Detesia at CISPA. It is aimed primarily at users who critically depend on the authenticity of digital media, such as financial institutions, law enforcement agencies, and media companies. Suspicious files can either be uploaded to Detesia's web platform for analysis or processed by the software integrated directly into the user's own IT system, which is recommended for highly sensitive content. Such data is better protected at Detesia than with many other providers because the programs run exclusively in data centers in Germany and are therefore subject to stringent data protection rules. In addition, the files are secured with state-of-the-art cryptographic methods so that only authorized persons can access them. Detesia is continuously developing its program to keep pace with rapidly advancing forgery methods. The company is funded by the German Federal Ministry of Education and Research and is currently implementing pilot projects with its first users, including law enforcement agencies, insurance companies, and journalists. The renowned research network Bellingcat, for example, uses the analysis tool to verify the authenticity of sensitive content that it discovers online or that is leaked to it.
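For readers curious what a programmatic integration could look like, here is a minimal sketch of uploading a file for analysis over HTTPS. The endpoint URL, token, and response fields are hypothetical stand-ins; Detesia's actual API is not documented here and may work differently.

```python
import requests

# Hypothetical values for illustration only; the real endpoint and
# authentication scheme may differ.
API_URL = "https://api.example.com/v1/analyze"
API_TOKEN = "replace-with-issued-token"  # issued to authorized users only

def analyze_file(path: str) -> dict:
    """Upload a suspicious media file over TLS and return the verdict."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"media": f},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"fake": true, "score": 0.97, "evidence": {...}}
```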
