Recent advances in artificial intelligence have made it possible to create convincing fake videos, or deepfakes, of real people. While the ethical implications and creative possibilities of this technology are only beginning to be explored, there are growing concerns that it could be used maliciously to ruin reputations and cause extensive damage.
Can you spot a fake video in which a famous person is saying things that are totally untrue? For example, if you are watching a viral video of President Barack Obama, can you tell whether it’s fake or authentic?
A computer vision researcher, Supasorn Suwajanakorn, recently gave a TED Talk showing how photorealistic deepfakes are created and how to tell whether you are looking at the real thing.
Inspired by the New Dimensions in Testimony project that enabled face-to-face interactions with a hologram of a real Holocaust survivor, Supasorn set out to develop a model that looks, talks and acts just like the real person.
The most recent AI techniques for video synthesis can generate fake videos from just a few still images. The same capabilities can also be leveraged in legitimate business applications, especially in marketing and advertising.
To create a model of a real person, Supasorn’s team took the following steps:
- collecting several photos and videos of a person;
- reconstructing a fine-detailed 3D face model from the collected images and videos without performing 3D scanning on the person;
- generating sharp facial textures and colors for a wide range of expressions;
- feeding the machine video footage of the person to teach it to imitate that person’s unique mannerisms;
- synthesizing mouth movements by using a neural network to convert input audio into a sequence of mouth points;
- synthesizing the texture, fine-tuning teeth and other facial details, and blending them into the head and background from a source video.
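The audio-to-mouth step above can be sketched in code. Below is a minimal, purely illustrative forward pass: a tiny feed-forward network maps one frame of audio features (e.g., MFCCs) to 2D mouth landmark coordinates. The actual system uses a recurrent network trained on many hours of footage; the layer sizes, feature dimensions, and function names here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical dimensions -- not taken from the talk.
AUDIO_DIM = 28      # assumed per-frame audio feature size (e.g., MFCCs)
HIDDEN_DIM = 64     # assumed hidden layer width
N_LANDMARKS = 18    # assumed number of mouth landmark points

# Random weights stand in for a trained model.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((AUDIO_DIM, HIDDEN_DIM)) * 0.1
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.standard_normal((HIDDEN_DIM, N_LANDMARKS * 2)) * 0.1
b2 = np.zeros(N_LANDMARKS * 2)

def audio_to_mouth(audio_features: np.ndarray) -> np.ndarray:
    """Map one frame of audio features to (x, y) mouth landmarks."""
    h = np.tanh(audio_features @ W1 + b1)   # hidden activation
    out = h @ W2 + b2                       # flat landmark coordinates
    return out.reshape(N_LANDMARKS, 2)      # one (x, y) pair per point

# One synthetic audio frame in, 18 mouth points out.
frame = rng.standard_normal(AUDIO_DIM)
mouth = audio_to_mouth(frame)
print(mouth.shape)  # (18, 2)
```

In the full pipeline, a sequence of such predicted mouth shapes drives the texture synthesis and blending stages that follow.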
The results are intriguing, realistic, and … frightening. To help prevent the misuse of this technology, Supasorn is currently involved in developing tools that combine machine learning and human moderation to detect fake images and videos.
For example, his team plans to release a web browser plugin called Reality Defender that spots fake content automatically, right in the browser, minimizing the damage deepfakes can do.
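The machine-learning-plus-moderation idea can be illustrated with simple triage logic: a detector assigns each video a fake probability, confident cases are handled automatically, and uncertain cases go to a human reviewer. The thresholds and function below are hypothetical and are not Reality Defender’s actual design.

```python
# Illustrative triage combining a model score with human review.
# The thresholds are assumptions, not values from any real tool.

def triage(fake_probability: float,
           auto_flag: float = 0.9,
           auto_pass: float = 0.1) -> str:
    """Route a video based on a detector's estimated fake probability."""
    if fake_probability >= auto_flag:
        return "flag"            # confident fake: warn automatically
    if fake_probability <= auto_pass:
        return "pass"            # confident real: no action needed
    return "human_review"        # uncertain: escalate to a moderator

print(triage(0.95))  # flag
print(triage(0.02))  # pass
print(triage(0.50))  # human_review
```

Keeping a human in the loop for the uncertain middle band is what lets such a system stay useful even as generators improve and the model’s confident region shrinks.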
If you want to see examples of fake videos generated with this approach, check out Supasorn’s full talk below.