
Weaponized Deepfakes Are Getting Closer to Reality 

By: Rik Ferguson, April 28, 2021

1. The real scenario

The first malicious use of video deepfakes may have been observed, making one of Trend Micro’s long-standing predictions a looming reality. At Trend Micro we’ve been keeping global businesses, governments and consumer customers secure for over three decades. One way we do that is by predicting where the next threats may come from and developing ways to mitigate them in advance.

For several years now we’ve been warning about the growing technical sophistication of “deepfakes” — AI-powered audio and video content designed to trick users into believing it’s the real thing. Unfortunately, events from just a few days ago suggest that this technology has matured very rapidly. While there is debate over the accuracy of the account of this specific incident, it nonetheless demonstrates how far the technology has come. As this reality nears, the whole cybersecurity industry, regulators and governments need to prepare.

Organisations such as the European Union and the FTC continue to take positive steps to curb possible malicious uses and abuses of AI via new regulations. It’s time for security leaders to build deepfakes into their threat modelling plans, and for the industry to follow our lead in researching ways to tackle these scams.

2. What are deepfakes?

To understand what deepfakes are, just consider the word: a combination of “deep learning” (a subset of machine learning) and “fake media.” There are slightly different ways to create this kind of content, but typically a deep neural network known as an “autoencoder” is trained to take video frames of a face, compress them into a compact representation and then rebuild them. Train a shared encoder on footage of two different faces, give each person their own decoder, and then swap the decoders: decoding Person B’s frames with Person A’s decoder makes Person A’s facial expressions and gestures mimic Person B’s. With this kind of trickery, you can effectively put words into the mouth of any individual.
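To make the mechanics concrete, here is a minimal, hypothetical sketch of that shared-encoder, two-decoder setup in PyTorch. All layer sizes, names and training details are illustrative assumptions, not a description of any specific deepfake tool:

```python
# Minimal sketch of the classic face-swap autoencoder (illustrative only).
# A single shared encoder learns a common facial representation; each person
# gets their own decoder. Decoding Person B's frames with Person A's decoder
# renders Person A's face mimicking Person B's expressions and gestures.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),        # compress to a code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),                             # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each person's cropped face frames are reconstructed through the
# shared encoder and that person's own decoder (random tensors as stand-ins).
frames_a = torch.rand(8, 3, 64, 64)
frames_b = torch.rand(8, 3, 64, 64)
loss = nn.L1Loss()(decoder_a(encoder(frames_a)), frames_a) + \
       nn.L1Loss()(decoder_b(encoder(frames_b)), frames_b)

# The swap: encode Person B's frames, decode with Person A's decoder, so that
# Person A appears to make Person B's expressions.
fake_a = decoder_a(encoder(frames_b))
```

Real attacks wrap this core idea in substantial preprocessing and postprocessing — face detection, alignment and blending the generated face back into the frame — which is where much of the engineering effort lies.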

This most recent attempt shows us how far the technology has come in just a few short years. It came at the expense of lawmakers in the UK, Latvia, Estonia, Lithuania and the Netherlands, who were tricked into believing they were on video calls with Leonid Volkov, a close ally and chief of staff to Russian opposition leader Alexei Navalny. A screenshot posted by one of the politicians involved shows just how convincing the fake video was.

Additionally, other proof-of-concept uses of deepfakes are emerging, raising questions about how else this technology might evolve.


3. Sounding the alarm

The opportunities for spreading disinformation like this at the very highest levels of government are almost limitless for those able to wield effective deepfake technology. Perhaps even more concerning is that doctored videos could also be used by hostile states or extortion-seeking cyber-criminals to undermine voters’ confidence in candidates up for election.

That’s not all. As we explained back in 2018, the same technology could be used to support Business Email Compromise (BEC) attacks, tricking finance team members into making large wire transfers to third parties.

Just imagine a CEO connecting via Zoom to give the order. It could be enough to convince many employees. In fact, this has already worked with deepfake audio, which was used back in 2019 to trick a British executive into wiring hundreds of thousands out of the company. One of our predictions for 2020 was that deepfakes would in time become the next frontier for enterprise fraud.

4. What happens next?

Financially motivated extortion and social engineering, and influence operations aimed at destabilizing democracies, are just the start. One expert recently claimed that as AI technology becomes more advanced and ubiquitous, the power to create highly convincing deepfakes could be in every smartphone user’s hands by the middle of this decade. So what can we do about it?

From a corporate perspective, security teams need to be prepared. That means heeding warnings like ours to understand how the technology could be leveraged for malicious purposes. We recently released a paper on the uses and abuses of AI, in partnership with the UN and Europol, which makes a good starting point.

We are also looking further into the future to anticipate how AI and deepfakes could become part of the ongoing cyber arms race in a few years, so look out for the release of “Project 2030” at this year’s RSA Conference. For now, we need to get better at training employees to spot the fakes and use tools to do the same.
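As an illustration of what such detection tooling does under the hood, here is a deliberately simplified, hypothetical sketch of a frame-level detector. Production tools combine many more signals (blink rate, lighting, compression artifacts) and far larger models; everything below is an assumption for illustration, not a description of any real product:

```python
# Hypothetical sketch of a frame-level deepfake detector: a small CNN that
# scores sampled video frames as real or fake. Illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # global average pooling
    nn.Flatten(),
    nn.Linear(32, 1),                           # one "fake" logit per frame
)

# Stand-in for frames sampled from a suspect video call or clip.
frames = torch.rand(4, 3, 64, 64)
fake_probability = torch.sigmoid(detector(frames))  # per-frame score in [0, 1]
print(fake_probability.squeeze(1).tolist())
```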

There’s a long road yet to travel with malicious AI, but by taking concrete steps to better understand these threats, we can gain a tactical advantage. Forewarned, as they say, is forearmed.
