Fake (sometimes called synthetic) content has exploded in the past few years, with Amazon research suggesting that up to 60 percent of the content on the internet is AI-generated.
Not all of this is “deepfake” content, which specifically refers to media deliberately designed to deceive, typically by realistically depicting real people doing or saying things they never did.
In recent years we’ve seen deepfakes used to manipulate elections, commit fraud and theft, and create pornographic images of people without their consent.
These problems are only likely to get worse as AI tools become more powerful and accessible. Grok 3, created by Elon Musk’s xAI, for example, is now free for all X (formerly Twitter) users and can be used to create convincing deepfakes of real people.
There are plenty of tools available that can protect against deception or abuse perpetrated with deepfakes. AI-based detection tools can determine, with varying degrees of success, whether content has been digitally generated or manipulated. Multi-factor authentication and other digital identity verification tools can also protect against social engineering attacks.
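To make that concrete, here is a minimal Python sketch of how automated screening might slot into a content workflow. The endpoint, credential and response field below are hypothetical placeholders rather than any real product’s API, and the threshold is arbitrary; the point is that a detector’s score should trigger human review, not deliver a final verdict.

```python
# A minimal sketch of an automated screening step that calls a
# deepfake-detection service before content is published or acted on.
# The endpoint, API key, and response field are hypothetical
# placeholders, not a real product's API; real detectors vary widely
# in both interface and accuracy.
import requests

DETECTION_ENDPOINT = "https://example-detector.invalid/v1/analyze"  # hypothetical
API_KEY = "your-api-key-here"  # hypothetical credential

def screen_media(file_path: str, threshold: float = 0.8) -> bool:
    """Return True if the file should be flagged as likely synthetic.

    A high score is treated as 'likely manipulated', but no detector
    is 100 percent reliable, so this should inform human review rather
    than replace it.
    """
    with open(file_path, "rb") as f:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    response.raise_for_status()
    score = response.json().get("synthetic_probability", 0.0)  # hypothetical field
    return score >= threshold

if __name__ == "__main__":
    if screen_media("incoming_video.mp4"):
        print("Flagged for human review: content may be AI-generated.")
    else:
        print("No automated flag raised, but stay skeptical.")
```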
But deepfake technology is evolving and improving so quickly that this isn’t enough, which is why education, critical thinking and awareness are equally important, if not more so, when it comes to protecting yourself.
Critical Thinking And Awareness
Technology can help protect us against deepfakes, but it doesn’t provide all the answers, and it isn’t 100 percent reliable.
This is why I believe that the best defense against being harmed by AI-generated disinformation lies in the human skills of contextual awareness and critical thinking.
Businesses need to start training employees today to assess content and information critically. And as individuals, we should understand how and why those with bad intentions will try to influence our thoughts and choices, including our democratic choices.
Deepfakes create unique challenges that require some specific skills to navigate:
Fact-checking and verification are two obvious ones. Today, it’s more vital than ever to check our sources and determine where information is really coming from. Facebook has developed features that warn users when information might be deceptive, such as when it comes from sources that appear to be hiding their identity.
However, it has also been criticized for removing its fact-checking functions, which could be seen as a backward step.
Contextual awareness can often be as simple as asking, “Does this make sense?”
Being aware of the context and situation probably helped many people in 2022 realize that a widely circulated video of Volodymyr Zelensky apparently surrendering to Russia was not genuine.
The employee of the engineering firm Arup who was duped into transferring $25 million to scammers using a deepfake of their boss might have been better prepared had they been more alert to the unusual nature of the request.
Critical thinking skills help us ask who information is coming from and why they want us to have it. Disinformation content will often play on emotion in an extreme way to cause fear, anger or excitement. Luckily, we can be trained to spot when we’re being manipulated in this way.
So, three things we should learn to consider about any piece of content we consume are:
Is it trying to influence me?
Is it likely to have happened?
Can I trust the person telling me?
Hammering home the importance of this approach to thinking critically about content is vital to protecting ourselves from the impact of deepfakes.
Adapting To A Post-Truth World
Deepfake technology is only going to become more sophisticated. We are likely to reach a point where even the best detection tools won’t always spot fake video, images or audio.
The human defenses I’ve covered here (critical thinking, awareness and reasoning) are likely to remain viable for longer.
Will regulation protect us? Some jurisdictions, such as China and the EU, have already introduced regulations regarding deepfakes. Unfortunately, though, criminals don’t always follow the law.
So organizations need to respond now to ensure everyone understands the procedures and policies for verifying and critically assessing information before acting on it.
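As a sketch of what such a policy might look like when encoded in a payment workflow, consider the following. The fields, thresholds and rules here are illustrative assumptions, not a prescribed standard; the underlying idea is simply that high-risk requests should always require confirmation through a separate, trusted channel.

```python
# A minimal sketch of a "verify before acting" policy, loosely inspired
# by cases like the Arup scam. The thresholds, channels, and rules are
# illustrative assumptions set by each organization's own policy.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_usd: float
    requested_via: str     # e.g. "email", "video_call", "in_person"
    requester_known: bool  # did this come through an established contact path?
    urgent: bool           # pressure and urgency are classic social-engineering signs

OUT_OF_BAND_THRESHOLD_USD = 10_000  # illustrative threshold

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """Decide whether a request must be confirmed via a separate,
    trusted channel (e.g., calling the requester back on a known number)."""
    if req.amount_usd >= OUT_OF_BAND_THRESHOLD_USD:
        return True   # large transfers always get a callback
    if not req.requester_known:
        return True   # unfamiliar contact path is a red flag
    if req.urgent and req.requested_via in ("email", "video_call"):
        return True   # urgency over spoofable channels
    return False

# Example: a request resembling the Arup case triggers verification.
request = PaymentRequest(
    amount_usd=25_000_000, requested_via="video_call",
    requester_known=True, urgent=True,
)
print(requires_out_of_band_check(request))  # True
```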
For individuals, awareness of the dangers of AI misinformation, whether it’s trying to influence who we vote for or who we give money to, should become second nature.
This should help to keep us on the right path in a future where we may not be sure whether we can trust what we see with our own eyes.