Andrew Sever is the CEO of Sumsub, a full-cycle verification platform that secures the whole user journey.
The advancing capabilities of AI-generated fraud, such as deepfakes, have become an alarming threat to the financial sector in recent years. In 2023 alone, the total number of publicly circulating audio and video deepfakes reached 500,000.
So far, the industry has managed to confront the issue by deploying more sophisticated systems that, among other tools, rely on AI-powered solutions. Yet, as we see today, the negative impact of deepfakes reaches far beyond banks and fintech companies. So, let's dive deeper into the challenges deepfakes pose to society, and the steps society can take to mitigate the consequences.
Deepfakes’ Influence On Politics And Beyond
The world of politics is flooded with misinformation, and deepfakes certainly aren't helping. Nowadays, it's easy to create highly convincing videos of politicians saying things they never said. One example is a video published during this year's election in Turkey, in which incumbent Istanbul mayor Ekrem Imamoglu praised Recep Tayyip Erdogan for his achievements in the city, something Imamoglu, as an opposition candidate, would not do.
With over 60 countries, home to roughly half of the world's population, holding elections this year, expect the deepfake problem to only get worse. My company's internal findings, based on millions of verification checks worldwide, bear this out.
In the U.S., just days after Kamala Harris replaced Joe Biden on the Democratic Party ticket, the internet was flooded with deepfakes of her, one of which was shared by tech billionaire Elon Musk on his X account. Similarly, fake images of Donald Trump in handcuffs circulated earlier this year, and many were convinced of their authenticity.
Ardi Janjeva, a research associate at The Alan Turing Institute, offered this warning: “Even if we are uncertain about the impact that deepfakes have on voting behaviour, this distortion may be harder to spot in the immediate term and poses long-term risks to our democracies.”
That said, deepfakes go far beyond targeting politicians. Today, anyone's identity can be imitated, whether for fraud in the financial industry or simply to destroy a reputation.
The New Yorker recently published an article about phone scammers who spoofed the phone numbers of their victims' loved ones and used AI to imitate their voices convincingly. In one account, a woman and her husband were tricked into sending several hundred dollars to a criminal pretending to be her mother-in-law. Cases like this are occurring more and more often, catching victims off guard.
What AI Regulations Propose (And Why It’s Not Enough Yet)
Regulators around the world are taking proactive steps to fight AI-generated fraud. In the EU, the AI Act has entered into force, and the U.K. recently released the white paper “A pro-innovation approach to AI regulation.” In the U.S., there is still no nationwide regulation, but several states, such as Texas and California, have passed laws on deepfakes.
However, these regulatory actions are still fairly new, and they often fail to give companies a complete set of steps to follow. Regulators should therefore take a more holistic and proactive approach, cooperating both with each other and with businesses.
On top of that, as the examples above show, no industry is immune to AI-generated fraud. Regulatory authorities and companies alike should treat deepfakes as a serious threat, even in sectors where the danger doesn't yet seem acute.
How Deepfake Detection In Businesses Can Apply To Other Spheres
Businesses in the financial sector fight fire with fire, deploying AI-based defenses against AI-powered attacks. AI helps them analyze vast databases, recognizing atypical patterns and anomalies that indicate fraud. This happens, for example, when a criminal tries to upload an AI-generated image during initial verification at a fintech company. Advanced deepfake detection solutions can spot such manipulations.
Open-source deepfake detection solutions use a variety of methods: analyzing texture and semantic features against extensive image datasets, focusing on anomalies in different regions of an image, employing multimodal approaches to examine image layers, and recognizing known fraud patterns.
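As a rough illustration of the anomaly-focused approach, synthetically generated images often carry unusual energy in the high-frequency part of the spectrum. The sketch below is an illustrative heuristic, not a production detector: it scores an image by the fraction of its spectral energy outside a low-frequency region, and the cutoff value is an assumption for demonstration only.

```python
import numpy as np


def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc.

    Upsampling steps in some generative pipelines leave artifacts in the
    high frequencies, so an unusual ratio can flag an image for closer
    review. The 0.25 cutoff is an illustrative choice, not a tuned value.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized distance of each frequency bin from the spectrum center.
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (min(h, w) / 2)
    high = power[radius > cutoff].sum()
    return float(high / power.sum())
```

In practice, such a score would be only one feature among many; real detectors combine spectral cues with the texture, semantic and multimodal signals described above.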
The issue with these models is that they are not always accurate. Commercial deepfake detection solutions developed by specialized anti-fraud companies can be far more effective, but they remain unavailable for open scientific analysis and testing.
So the best approach for media and social media platforms fighting deepfakes is to implement the solutions available to them, whether open-source technology or commercial providers. This way, they can detect deepfakes and mark AI-generated content as such for users, a practice that should apply to various formats:
• Pictures
• Videos
• Podcasts and audio messages
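One way a platform could turn detector output into user-facing labels across these formats is a simple thresholding step. The sketch below assumes a hypothetical detector that returns a confidence score between 0 and 1; the threshold values and label wording are illustrative, not an established standard.

```python
from dataclasses import dataclass


@dataclass
class ContentLabel:
    media_type: str  # "image", "video" or "audio"
    ai_score: float  # hypothetical detector confidence, 0.0 to 1.0
    label: str       # user-facing tag


def label_content(media_type: str, ai_score: float,
                  flag_at: float = 0.5, warn_at: float = 0.8) -> ContentLabel:
    """Map a detector confidence score to a user-facing label.

    The thresholds are assumptions for illustration; a real platform
    would tune them per media type and per detector.
    """
    if ai_score >= warn_at:
        tag = "Labeled: AI-generated"
    elif ai_score >= flag_at:
        tag = "Notice: possibly AI-generated"
    else:
        tag = "No AI label"
    return ContentLabel(media_type, ai_score, tag)
```

A two-tier scheme like this lets a platform warn cautiously on uncertain scores while reserving the strong label for high-confidence detections.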
The Future Of AI And Deepfakes
Judging by existing trends, we can only expect the deepfake problem to get worse, and confronting it requires a multilayered approach. Businesses are the most exposed to this threat because scammers target them first, so they are obliged to find ways to fight AI fraud. While they are not always successful, they implement the best solutions available, and other sectors should follow their example in setting up defenses against deepfakes.
In particular, the media industry should focus on the following aspects:
• Mark AI-generated content and warn users about accounts with questionable authenticity.
• Educate users to spot the signs of AI-driven misinformation, and raise awareness of the risks they face when using the platform.
• Pay attention to the latest regulatory developments in AI to discern best practices for their sector.
• Implement AI-driven solutions, such as liveness (facial biometrics) checks for users and deepfake detection tools.
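The liveness check mentioned in the last bullet typically works as a challenge-response flow: the system asks the user to perform random gestures on camera and verifies that they occur in the requested order, which is hard for a pre-rendered deepfake to satisfy. The sketch below shows only that control flow; the gesture names are hypothetical, and in a real system the observed gestures would come from a video-analysis model rather than being passed in directly.

```python
import random
from typing import List, Optional

# Hypothetical gesture prompts; real liveness products use vendor-specific sets.
GESTURES = ["turn_left", "turn_right", "smile", "blink"]


def issue_challenge(n: int = 3, rng: Optional[random.Random] = None) -> List[str]:
    """Pick a random, unpredictable sequence of gestures for the user to perform."""
    rng = rng or random.Random()
    return rng.sample(GESTURES, n)


def verify_liveness(challenge: List[str], observed: List[str]) -> bool:
    """Pass only if the observed gestures match the challenge in order.

    `observed` is a stand-in here; a production system would extract it
    from the live camera feed with a video-analysis model.
    """
    return observed == challenge
```

The security comes from the challenge being random and issued at verification time, so an attacker cannot prepare matching footage in advance.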
Given the effort the financial sector puts into thwarting fraudsters, we can hope that, with the same approach applied more broadly, ordinary users will be protected from AI-generated misinformation.
The challenge today is to find the right combination of steps: continued technological advancement in deepfake detection, regulatory practices that develop in line with the real-world challenges facing companies and wider society, and greater awareness among the general public.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.