The Future Will Be Synthesised – E2: Deepfakes for disinformation


About the author

Thomas Graham


At Metaphysic we are very proud that our own Head of Policy and Partnerships, Henry Ajder, is presenting a BBC Radio 4 and BBC Sounds documentary called “The Future Will Be Synthesised”.

Episode 2: Deepfakes for disinformation 

Today Henry interviews an amazing cast of expert commentators, including Ukraine 24 presenter Kateryna Fedotenko, Witness Program Director Sam Gregory, France 24 journalist Catherine Bennett and Synthesia founder Victor Riparbelli.


This episode covers the following important discussion:

Ever since the 2018 mid-term elections in the US, people have been sounding the alarm that a deepfake could be used to disrupt or compromise a democratic process. These fears have not yet come to pass, but recently deepfakes of Zelensky and Putin were deployed as the Ukrainian conflict escalated.

How much disruption did these deepfakes cause? How convincing were they? And are they an omen of things to come? Could deepfakes enhance disinformation campaigns that already cause significant harm?

Presenter and synthetic media expert Henry Ajder unpicks the most recent deepfake video, and speaks to a journalist who debunked an unusual news report that used a deepfake news presenter to spread disinformation in Mali.

For those of you who can’t listen in, key points are outlined below:

Henry talks to Kateryna Fedotenko, journalist and presenter for news channel Ukraine 24, a few weeks after the strange video of Zelensky was released. Describing the clearest malicious use of deepfakes to spread disinformation seen so far, Kateryna states:

“[In the deepfake video], our president told us he is tired, he told our soldiers to give up and to not fight against Russia…Just after the appearance of this video on the social networks, hackers attacked the running line of our TV Channel. They had written a speech from our so-called president stating that we all have to give up”.

The hackers not only took over Ukraine 24’s news ticker, where headlines run along the bottom of the screen; they also spread a virus on the station’s computer systems and posted a screenshot of the deepfake video on the station’s website.

Kateryna Fedotenko later describes Ukraine’s response and what they have done to defend themselves from future attacks.

“This was planned by the Russians and we had the information that they were preparing such a deepfake video in our investigation department…A few days before [its release], I was sharing the warning that it may happen. We were telling the people what deepfake is.”

Kateryna Fedotenko continues:

“We journalists and our viewers understand that Russian propaganda is a huge powerful machine…We have strengthened our protection and applied the necessary services to prevent such incidents from reoccurring. We work in the face of constant air alerts and constant missile and air strikes, which makes our work harder.”

Henry then speaks with Sam Gregory, with whom he has worked on several investigations into deepfakes. Sam is Program Director of Witness, an organisation that helps people use video and technology to protect and defend human rights.

“The Zelensky deepfake is really interesting because it’s almost the best case of how a deepfake can land in a complex conflict environment…A number of weeks before, the Ukrainian authorities start telling their citizens that there was likely to be a specific fake that involves Zelensky surrendering…Zelensky has incredible social channels and is incredibly trusted, and goes online to say ‘it’s not me’. These are all best-case scenarios: you’ve warned people specifically, it’s a bad deepfake, and then you have the person in the situation able to speak credibly and debunk it. And it was also a complete slam dunk for the platforms because they have policies that are designed to say, you know, if a reasonable person could be deceived by a deepfake, we’ll take it down.”

Sam warns that this best-case scenario sets us up to be a little complacent.

Sam also discusses a video showing Putin announcing a peace accord that was released a few weeks before the Zelensky deepfake.

“The Putin video was first released in late February and was basically framed as satire. After the Zelensky video came out it was rediscovered. Without that satirical context, people were debating online, is this an attempt at disinformation?…The difference between satire and deception is quite fine when it comes to deepfakes, particularly when you lose the satire around them.”

Henry discusses another case with Catherine Bennett, a journalist for France 24 who debunked a deepfake video of a television news presenter that was shared primarily in Mali.

“The presenter in this video says that France gave money to various Mali political parties in order to support their continued military presence in the country. This was all happening against the background of France gradually withdrawing its troops from Mali, which was something that a lot of Malian people supported.”

“One of the first things we always do when we’re debunking images and videos that we see online is to do a reverse image search…We came across the same presenter in different videos talking about completely different things and speaking in different languages.”
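The matching step behind a reverse image search can be illustrated with a toy perceptual hash: thumbnails of the same image produce near-identical hashes even after re-encoding or brightness changes, which is roughly how the same presenter surfaces across unrelated clips. Everything below is a minimal illustrative sketch, not the method any real search engine uses; production systems rely on far more robust descriptors.

```python
# Toy "average hash": a bit string marking which pixels of a small
# grayscale thumbnail sit above the mean. Similar images give similar
# bit strings; the Hamming distance between them measures the match.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255), e.g. a small thumbnail.
    Returns a bit string with '1' where the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# Two versions of the same synthetic 4x4 image (one slightly brightened),
# plus an unrelated gradient image.
frame = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]
brighter = [[p + 20 for p in row] for row in frame]
unrelated = [[i * 16 for i in range(4)] for _ in range(4)]

print(hamming(average_hash(frame), average_hash(brighter)))   # 0: same image
print(hamming(average_hash(frame), average_hash(unrelated)))  # 8: different
```

In practice a debunking team would upload the frame to a service such as TinEye or Google Images rather than compute hashes by hand, but the principle — compare compact fingerprints rather than raw pixels — is the same.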

Catherine Bennett continues:

“If you’re living in Mali, if you’re illiterate or if you’re not particularly savvy when it comes to online information and digital media…then it’s quite likely you could be taken in by that.”

Henry reached out to Synthesia, the company whose technology was misused to create the false video. Their founder, Victor Riparbelli, responded:

“In this case, the user was banned 24 hours after. We have updated our system since then to be even more aware of news content. We are now building a process to vet new information. Right now everything goes through moderation.”

Metaphysic builds software to create incredible content with the help of artificial intelligence. Co-founded by Chris Umé, the leading creator of ethical hyperreal synthetic media and creator of @deeptomcruise, Metaphysic has been featured in many leading publications around the world, including CNN, 60 Minutes and Business Insider. Metaphysic’s commitment to the evolution of ethical synthetic media is highlighted by our initiation and sponsorship of Synthetic Futures – a community dedicated to shaping a positive future for the technology in its many different forms.

For more information, please contact us.


More To Explore


Detecting AI-Generated Images With Inverted Stable Diffusion Images – and Reverse Image Search

A new system for the detection of AI-generated images trains partially on the noise maps typical of Stable Diffusion and similar generative systems, and also uses reverse image search to compare images against online images from 2020 or earlier, prior to the advent of high-quality AI image systems. The resulting fake detector works even on genAI systems that have no public access, as well as on the DALL-E series and Midjourney.
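The two signals described above — a noise-map classifier score and a provenance check against the pre-genAI web — might be combined with decision logic roughly like the following. This is a hedged sketch only: the function name, threshold, and labels are all hypothetical, not the actual system's API.

```python
# Hypothetical combination of the two detection signals. `noise_score`
# stands in for a noise-map classifier's confidence (0.0-1.0), and
# `found_before_2021` for a reverse-image-search hit that predates
# high-quality AI image generators. The threshold is arbitrary.

def classify(noise_score: float, found_before_2021: bool,
             threshold: float = 0.5) -> str:
    if found_before_2021:
        # Provenance trumps the classifier: the image already existed
        # before modern generative systems could have produced it.
        return "real"
    return "ai-generated" if noise_score > threshold else "inconclusive"

print(classify(0.9, False))  # ai-generated
print(classify(0.9, True))   # real
print(classify(0.2, False))  # inconclusive
```

The design point is that the reverse-search check acts as a hard override, so a noisy classifier can never flag an image with verifiable pre-genAI provenance.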

Illustration developed from 'AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control'.

Powering Generative Video With Arbitrary Video Sources

Making people move convincingly in text-to-video AI systems requires that the system have some prior knowledge about the way people move. But baking that knowledge into a huge model presents a number of practical and logistical challenges. What if, instead, one was free to obtain motion priors from a much wider net of videos, instead of training them, at great expense, into a single model?
