With AI making it increasingly easy for anyone to create truth-bending content, it’s widely said that we are entering a “post-truth” era – one in which it’s ever harder to tell whether what we see online is real or has been AI-generated by someone who wants to deceive us.
Without a doubt, fake video – sometimes known as deepfake or synthetic video – has the potential to be the most deceptive form of fake content. Video evidence was once the gold standard of truth, even in legal matters – but those days are long gone.
Today, readily available tools and affordable hardware make it easy to produce hyper-realistic fake videos that appear to show anyone saying or doing anything.
But that doesn’t mean we have to be defenseless. Here, I’ll outline the steps we can take to help us tell facts from fiction if we want to protect individuals, businesses, and society from the growing threat of fake videos.
The Dangers Of Deepfake Videos
Deepfake videos are an emergent threat of the AI era – meaning they pose risks that individuals, businesses and society have never had to face before. The ability to create synthetic video content that can fool people into thinking it’s real has the potential to influence public opinion and even destabilize democratic processes and institutions.
For example – in the run-up to this year’s U.S. presidential election, former Department of Homeland Security chief of staff Miles Taylor observed that hostile states intending to spread disruption no longer need to influence the vote itself. All they need to do is sow doubt that the process has been carried out fairly.
This isn’t just hypothetical. It was recently revealed that deepfake technology allowed a hostile actor to impersonate a top Ukrainian security official during a video call with a U.S. senator. Although the attempted deception was detected before damage was done, the dangerous nature of this near-miss is clear.
Ukraine was the target of an earlier deepfake attack in 2022, when synthetic video footage of President Volodymyr Zelensky appeared to show him surrendering and urging Ukrainians to lay down their weapons shortly after the war started.
These examples show the truly global scope of the disruptions that deepfake video could potentially cause. So, how do we go about protecting ourselves from falling victim?
Methods For Detecting Deepfake Video
We can split the possible methods of identifying and mitigating the threat of deepfakes into four general categories. These are:
Detecting visual cues – this means spotting indicators that are visible to the naked eye. These could include tell-tale irregularities and unnatural movements – particularly involving facial expressions – that just seem “off.” Inconsistent lighting and fading or blurring at the boundaries of the faked elements of the video (such as mouth movements when lip-syncing has been used) are other potential indicators.
Technological tools – this covers a growing number of software applications specifically designed to detect deepfake videos, such as Intel’s FakeCatcher and McAfee Deepfake Detector. These work by applying machine learning algorithms that can detect patterns or visual indicators that would be missed by the naked eye but show up clearly in a digital analysis of the source data.
Critical thinking – this involves checking sources and asking questions. Is the source of the video trustworthy? Is the content of the video likely to be true? Can you cross-reference it with other sources that cover the same event to establish the truth? And are there logical inconsistencies that seem at odds with what is realistically possible?
Professional forensic investigation – while beyond the reach of amateurs, larger organizations and law enforcement agencies can access specialized tools, often powered by the same neural networks used to create deepfakes. Forensic analysis involves trained investigators examining videos frame-by-frame for pixel-scale irregularities or using reverse image search to trace the original source of any footage used to create fakes. Professional investigators can also use biometric analysis to detect inconsistencies in facial features that indicate manipulation.
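To give a flavor of the kind of frame-by-frame analysis that forensic tools automate, here is a deliberately simplified sketch: it flags abrupt pixel-level changes between consecutive grayscale frames, as might appear at a crude splice boundary. Real detectors rely on trained neural networks and far richer features; the flat frame representation and the threshold value here are illustrative assumptions, not any product’s actual method.

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute pixel difference between two equal-sized grayscale frames."""
    total = sum(abs(a - b) for a, b in zip(frame_a, frame_b))
    return total / len(frame_a)

def flag_abrupt_transitions(frames, threshold=30.0):
    """Return indices of frames whose change from the previous frame exceeds
    `threshold` -- a crude stand-in for the pixel-scale irregularities that
    forensic investigators look for (illustrative only)."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Three nearly identical toy "frames", then one with a sudden jump,
# as might occur at a clumsy edit or splice point.
frames = [
    [10, 10, 10, 10],
    [11, 10, 12, 10],
    [10, 11, 10, 11],
    [200, 190, 210, 205],  # abrupt change
]
print(flag_abrupt_transitions(frames))  # -> [3]
```

In practice, genuine scene cuts also produce large inter-frame differences, which is one reason serious forensic work combines many such signals rather than relying on a single heuristic.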
Future Implications
So, what lies in store for us in a world where seeing is no longer believing?
With deepfakes now an inescapable fact of everyday life, it becomes the responsibility of individuals, businesses, and governments to make sure they have protective measures in place.
It’s clear to me that one implication is that precautionary measures, training, and the development of critical thinking skills among workforces should now be a part of any organizational cybersecurity strategy.
Employees should be taught to stay on their guard and to identify the tell-tale signs of synthetic video, just as detecting and avoiding phishing attacks is now standard practice.
We can also expect to see a growing reliance on authentication and verification systems. For example, deepfake detection could become a built-in feature of video conferencing tools, flagging attempts to siphon data from apparently confidential conversations.
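One form such verification systems could take is cryptographic signing of footage at capture time, so that any later tampering breaks the signature (the C2PA content-provenance standard works along broadly these lines). The sketch below uses Python’s standard-library HMAC purely as a stand-in: the key handling, function names, and scheme are illustrative assumptions, not a real product’s API – production systems use public-key infrastructure rather than shared secrets.

```python
import hashlib
import hmac

def sign_video(video_bytes: bytes, key: bytes) -> str:
    """Produce a provenance tag for the footage (HMAC-SHA256 sketch)."""
    return hmac.new(key, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, key: bytes, tag: str) -> bool:
    """True only if the footage is byte-for-byte what was originally signed."""
    return hmac.compare_digest(sign_video(video_bytes, key), tag)

key = b"shared-secret"  # illustrative only; real systems use PKI, not shared secrets
original = b"...raw video stream..."
tag = sign_video(original, key)

print(verify_video(original, key, tag))                  # True
print(verify_video(b"...tampered stream...", key, tag))  # False
```

The design point is that verification happens on the receiving end: a conferencing tool holding the verification key can automatically warn users when a stream’s provenance tag does not check out.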
Ultimately, our response must involve technological development, vigilance and education if we want to minimize the extent to which deepfake video becomes a destabilizing influence on our lives.