VERITAS Technology


Camera apps have become increasingly sophisticated. Users can elongate legs, remove pimples, add animal ears and now, some apps can even create false videos that look very real. The technology used to create such digital content has quickly become accessible to the masses, and the results are called “deepfakes.”

Deepfakes are manipulated videos or other digital representations, produced by sophisticated artificial intelligence, that yield fabricated images and sounds that appear real. In fact, anybody with a computer and access to the internet can technically produce deepfake content.

As such videos become more sophisticated and accessible, deepfakes raise a set of challenging policy, technology, and legal issues.

What are deepfakes?

The word “deepfake” combines the terms “deep learning” and “fake”; the technology behind it is a form of artificial intelligence.

In simple terms, deepfakes are falsified videos made by means of deep learning, a subset of artificial intelligence that refers to arrangements of algorithms that can learn and make intelligent decisions on their own.
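To make “algorithms that can learn on their own” concrete, here is a minimal sketch of learning from data: a single-parameter model adjusts itself by gradient descent to fit examples. Real deep learning stacks millions of such parameters in layered networks, but the learn-from-error loop is the same idea; the function name and numbers are illustrative, not from any particular system.

```python
def learn_scale_factor(examples, steps=200, lr=0.05):
    """Fit y = w * x to (x, y) pairs by repeatedly reducing the error."""
    w = 0.0  # start with no knowledge of the relationship
    for _ in range(steps):
        # average gradient of the squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad  # nudge w in the direction that shrinks the error
    return w

# The model "discovers" that outputs are triple the inputs.
data = [(1, 3), (2, 6), (3, 9)]
w = learn_scale_factor(data)
print(round(w, 2))  # 3.0
```

No rule “multiply by three” was ever written down; the value emerges purely from the examples, which is the sense in which the system learns on its own.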

But the danger is that the technology can be used to make people believe something is real when it is not. It can be used to undermine the reputation of a political candidate by making the candidate appear to say or do things that never actually occurred. Deepfakes are powerful new tools for those who might want to use misinformation to influence an election.

How do deepfakes work?

A deep-learning system can produce a persuasive counterfeit by studying photographs and videos of a target person from multiple angles, then mimicking that person’s behaviour and speech patterns.

Once a preliminary fake has been produced, a method known as a GAN, or generative adversarial network, makes it more believable. The GAN process seeks to detect flaws in the forgery, leading to improvements that address those flaws. After multiple rounds of detection and improvement, the deepfake is complete.
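The alternating detect-and-improve structure described above can be sketched as a toy loop. The “discriminator” here is just a hand-written flaw score and the “generator” a single number being tuned; real GANs pit two trained neural networks against each other, but the round-by-round pattern is the same. The target value and step size are assumptions for the example.

```python
import random

TARGET = 0.75  # statistic of "real" data the forgery must match (assumed)

def discriminator(sample):
    """Return how detectably fake the sample is (0 = indistinguishable)."""
    return abs(sample - TARGET)

def refine(sample, flaw, step=0.5):
    """Generator update: move the forgery to reduce the detected flaw."""
    direction = 1.0 if sample < TARGET else -1.0
    return sample + direction * step * flaw

forgery = random.random()  # initial crude fake
for round_num in range(50):
    flaw = discriminator(forgery)   # detection pass
    if flaw < 1e-6:                 # the fake now passes the detector
        break
    forgery = refine(forgery, flaw)  # improvement pass

print(abs(forgery - TARGET) < 1e-6)  # True
```

Each round halves the detectable flaw, so after a few dozen iterations the forgery is indistinguishable to this detector, mirroring how repeated adversarial rounds polish a deepfake.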

According to an MIT Technology Review report, a device that enables deepfakes can be “a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections.”

In fact, “AI tools are already being used to put pictures of other people’s faces on the bodies of porn stars and put words in the mouths of politicians,” wrote Martin Giles, San Francisco bureau chief of MIT Technology Review. GANs didn’t create this problem, but they’ll make it worse.

How to detect manipulated videos?

While AI can be used to make deepfakes, it can also be used to detect them. With the technology accessible to any computer user, more and more researchers are focusing on deepfake detection and on ways to regulate the technology.

Large corporations such as Facebook and Microsoft have taken initiatives to detect and remove deepfake videos. The two companies announced earlier this year that they will be collaborating with top universities across the U.S. to create a large database of fake videos for research, according to Reuters.

For now, close inspection can reveal subtle visual flaws: mismatched ears or eyes, fuzzy borders around the face, unnaturally smooth skin, or inconsistent lighting and shadows.
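One of these tells, unnaturally smooth skin, can be turned into a simple automated check. Real detectors are trained neural networks; this sketch just measures local pixel variation in a grayscale patch (a list of rows, values 0 to 255) and flags patches whose texture is implausibly flat. The threshold and sample values are assumptions for illustration, not calibrated figures.

```python
def mean_local_variation(patch):
    """Average absolute difference between horizontally adjacent pixels."""
    diffs = [abs(row[i + 1] - row[i])
             for row in patch
             for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def looks_too_smooth(patch, threshold=2.0):
    """Flag a patch whose texture variation falls below the threshold."""
    return mean_local_variation(patch) < threshold

natural_skin = [[120, 124, 119, 126], [118, 125, 121, 127]]  # noisy texture
airbrushed   = [[120, 120, 121, 120], [120, 121, 120, 120]]  # suspiciously flat

print(looks_too_smooth(natural_skin))  # False
print(looks_too_smooth(airbrushed))    # True
```

Checks like this are brittle on their own, which is why production systems combine many such cues and learn the thresholds from large datasets of real and fake video.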

But detecting the “tells” is getting harder and harder as the deepfake technology becomes more advanced and videos look more realistic.

Even as the technology continues to evolve, detection techniques often lag behind the most advanced creation methods. The real question is whether people will be more likely to believe a deepfake, or a detection algorithm that flags the video as fabricated.