Challenge #56

Debunking deepfakes, securing trust.

Deepfakes are images, videos, texts, or audio files that look deceptively real but are in fact fake. In the near future, they could threaten our democracy. Our goal: to make sure people can trust their own eyes and ears.

Participating centers

Much of our communication has moved into digital space. But what if the things we see, hear, and read there are not real at all? People have already learned to use artificial intelligence to manipulate photos and videos, to make machines imitate the voices of real people, and to generate texts that appear coherent. While this technology holds great potential, it also carries serious risks. It is therefore important to ensure that we as humans will still be able to tell the difference between fakes and reality in the future.

This is why we are working on algorithms capable of detecting and exposing deepfakes. Because the technology behind them is constantly evolving, we are continually refining our methods to spot and flag deepfakes so that they remain permanently distinguishable from genuine content. For example, we apply a digital watermark to machine-generated texts and embed special fingerprints in fake photos, thus ensuring that they are recognizable as deepfakes. In the long run, this will allow us to develop new trust in reliable data.
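To make the watermarking idea concrete: one simple, well-known technique for embedding an invisible marker in an image is least-significant-bit (LSB) embedding. The sketch below is purely illustrative and assumes grayscale pixel values; it is not the method used by the project, whose algorithms are not described here, and all function names are our own.

```python
# Illustrative sketch only: least-significant-bit (LSB) watermarking,
# a simple way to hide a marker in image pixel data. NOT the project's
# actual method; function names and parameters are assumptions.

def embed_watermark(pixels, bits):
    """Embed a list of 0/1 bits into the LSBs of grayscale pixel values."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked

def extract_watermark(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

# Example: mark a tiny "image" with the tag 1011
image = [200, 113, 54, 90, 177, 33]
tag = [1, 0, 1, 1]
marked = embed_watermark(image, tag)
assert extract_watermark(marked, 4) == tag
# Each pixel value changes by at most 1, so the mark is imperceptible
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

Real watermarking schemes are far more robust (surviving compression, cropping, and re-encoding), but the principle is the same: a machine-readable signal is embedded that does not disturb human perception.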
