The AI narrative swings between utopian dreams and dystopian nightmares, often overshadowing the nuanced reality of its current capabilities and limitations. As we stand on the cusp of widespread AI integration, it’s imperative to dissect the hype and uncover the true implications of this rapidly evolving technology.

Among the myriad concerns surrounding AI, one particularly unsettling claim is that it might lead to a world in which it’s impossible to distinguish truth from fabrication. This fear isn’t unfounded; the rise of sophisticated technologies like deepfakes and generative AI has democratized the creation of deceptively realistic content, putting powerful tools of manipulation within reach of the average user.

But does this technological leap truly herald an era where reality becomes indistinguishable from fiction? And if so, what are the ramifications for a society whose democratic foundations rest upon the bedrock of informed decision-making?

The Age Of Lies

When AI deepfake technology is used in Hollywood to de-age actors – for example, enabling Harrison Ford to once again play a young Indiana Jones – it’s harmless fun. But when the same technology is used to make it appear that political figures have spoken or acted in ways that they never would, it’s far more worrying.

Recent examples include deepfakes of both Kamala Harris and Donald Trump, and with this year’s US elections approaching, the dangers are obvious.

In another example, audio deepfakes have been used to “robocall” potential voters, urging them not to take part in elections, in a clear attempt to subvert democracy.

AI can also undermine the credibility of genuine information by making us question whether it is real. Take, for example, Donald Trump’s recent claim that his opponent had used AI to artificially inflate the apparent size of the crowd at her rally.

Although the claim was quickly disproven, it looks to me like an attempt to exploit the “liar’s dividend” – sowing doubt simply because something could be fake, even when there is no evidence that it is.

This interference is by no means limited to the US elections. Deepfake footage of a candidate in last year’s Taiwanese election appearing to endorse his rivals was used in an attempt to discredit him.

Fake footage also appeared of a Moldovan election candidate threatening to make a popular drink illegal in order to protect the environment.

And in Bangladesh, deepfake videos showed an opposition politician wearing a bikini in public – an act that would likely be considered offensive in the Muslim-majority country.

This rise in AI-driven misinformation clearly has the potential to damage the public’s trust in democratic processes, and as these tools become more accessible and sophisticated, we can expect the problem to grow.

Reality Check

So, there’s obviously some truth to the claim that AI can blur the boundaries between truth and fiction. But does this necessarily mean that we’re headed towards a future where anything and everything we see online is potentially deceptive?

Well, while AI can obviously create very convincing fakes, there are technological limits to what it can do. On close inspection, it’s often possible to spot where manipulation has taken place: telltale signs include unrealistic lighting, reflections or movements, and irregular speech patterns or mannerisms.

And while we may not always spot these at first glance, there are also technological solutions that can pick up on more subtle clues – for example, signs that video has been stitched together from different sources or generated entirely from scratch by algorithms. While the technology used to create deepfakes will undoubtedly become more sophisticated, so too will the technology capable of detecting them.
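To make the detection idea concrete, here is a minimal sketch in Python of one classic image-forensics technique, error level analysis (ELA). It flags regions of a JPEG that recompress inconsistently with the rest of the image – a common artifact of splicing. The file names are hypothetical, and this is only an illustration of the general principle, not how production deepfake detectors actually work.

```python
# A minimal sketch of error level analysis (ELA), assuming Pillow is installed
# (pip install Pillow). ELA re-saves a JPEG at a known quality and compares the
# result with the original: regions edited after the first compression tend to
# recompress differently, so they stand out in the difference image.
import io

from PIL import Image, ImageChops, ImageEnhance


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return a brightness-scaled difference map highlighting suspect regions."""
    original = Image.open(path).convert("RGB")

    # Re-save at a fixed JPEG quality in memory, then reload the result.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixels that compress inconsistently with the rest of the image
    # (a common sign of splicing) appear brighter in the difference.
    diff = ImageChops.difference(original, resaved)

    # The raw differences are usually faint, so scale them up to be visible.
    max_channel = max(hi for _, hi in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_channel)


if __name__ == "__main__":
    # "suspect_frame.jpg" is a hypothetical input file used for illustration.
    ela_map = error_level_analysis("suspect_frame.jpg")
    ela_map.save("ela_map.png")  # unusually bright patches warrant a closer look
```

ELA is only a heuristic – heavy recompression or fully AI-generated imagery can defeat it – which is why modern detection tools combine many such signals, often with machine-learned classifiers.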

It’s also possible that regulation may play a role in reducing the threat posed by reality-bending AI. AI laws recently passed in both the EU and China, for example, impose strict obligations on deepfakes – such as clear labeling – and prohibit their use to impersonate people or spread disinformation. Similar provisions are likely to appear in other jurisdictions as time goes on.

But it’s likely that the best defense against the threat will come from education and a growing public awareness of the risks. Humans, after all, are a remarkably adaptable species, and our ability to critically assess what we see is likely to evolve as we become more used to being bombarded with fake content and disinformation.

Simple practices like checking facts and researching the credibility of sources before deciding whether we believe or disbelieve something we see online can go a long way to protecting against the threat that AI poses to the truth.

A little common sense can also go a long way—for example, asking ourselves, “Would this person really have said or done that?”

Navigating Truth And Fiction In An AI Future

While I believe that AI has the potential to make it harder to tell truth from lies, the idea that it will make this impossible is somewhat overblown.

Certainly, the risk that some people will act on, or base their beliefs on, AI-generated misinformation is very real. There’s a need for continued vigilance and the ongoing development of methods – technological, legislative and sociological – to strengthen our ability to recognize what is real and what is likely to be fake.

I can see it becoming necessary to teach these critical thinking skills from an early age, for example in schools. After all, AI seems likely to play an increasingly important role in education, and it would make sense to make identifying and understanding its risks part of that curriculum.

With the right tools, oversight and awareness, it should be possible to navigate the challenges that AI poses to truth, although it may mean making some changes to the way we think about and assess what we see and hear.
