“There will one day spring from the brain of science a machine or force so fearful in its potentialities, so absolutely terrifying, that even man . . . will be appalled, and so abandon war forever,” predicted the engineer and inventor Thomas Edison a century ago.
So far, even the development of nuclear weapons — which are governed by a dedicated arms control strategy — has failed to deliver Edison’s prophecy. But we are now on the cusp of a new era: warfare powered by artificial intelligence, in which combatants have the means to deploy fully autonomous killing machines in battle. The realisation of weaponised AI has not yet prompted any consensus on how to constrain its use. Instead, China has embarked on an arms race to develop AI-controlled weapons, in which the US and its allies are now determined to compete. Russia’s president Vladimir Putin has made his own intentions clear by declaring: “whoever leads in this sphere will rule the world”.
Seeking to make sense of what this all means, I, Warbot, by the King's College London academic Kenneth Payne, examines the benefits, dangers and limitations of artificially intelligent conflict. At the same time, another trend is coming into play. The Weaponisation of Everything by Russia expert Mark Galeotti neatly illustrates the migration of war into more shadowy theatres of cyber, disinformation and organised crime.
While these two books consider contrasting scenarios for future conflict, both are exceptionally timely. In last year’s BBC Reith Lectures, computer scientist Stuart Russell issued grave warnings about AI weapons, telling the Financial Times they pose a “threat to humanity”. Meanwhile, as Nato allies gird themselves for a possible Russian invasion of Ukraine, Moscow is already accused of using multiple means to destabilise and intimidate Kyiv in the prelude to, or perhaps even instead of, military engagement.
Galeotti’s field guide is an admirably clear overview (in his words, “quick and opinionated”) of a form of conflict that is vague and hard to grasp. Variously described as hybrid, sub-threshold or grey-zone warfare, this is the no man’s land between peaceful relations and formal combat. Here, adversaries use cyber attacks, disinformation, organised crime and the tools of “lawfare” to wear down their targets subtly, insidiously and often deniably. It is a strange democratisation of warfare in which proxies such as the Kremlin-linked Wagner Group private security firm, or mercenaries in the form of hackers-for-hire, can be deployed by governments, terrorist groups or even companies. The result is a constant state of low-level conflict that challenges normal definitions, since the opening of hostilities is never formally declared, the enemy can hide behind a false identity, and victory is rarely clear-cut.
One of Galeotti’s most enlightening arguments is the connection between grey-zone conflict and globalisation. “It used to be the orthodoxy that interdependence stopped wars,” he writes. “In a way, it did — but the pressures that led to wars never went away, so instead interdependence became the new battleground.” It is hard to think of a better example of this than concerns expressed by the US and other Nato allies over Berlin’s energy dependence on Moscow, epitomised by the Nord Stream 2 gas pipeline. Similarly, speculation that the US could punish Russia for military action in Ukraine by suspending it from the Swift banking system demonstrates how globalised finance structures can be used as leverage against adversaries.
Of course, the concept of sub-threshold aggression, or political warfare, is not new. As Sun Tzu, the Chinese philosopher-general, wrote 2,500 years ago: “the supreme art of war is to subdue the enemy without fighting”. But the proliferation of social media, and the era of high-tech weaponry, have hugely widened the scope of such operations. As Galeotti points out, in March 2020, China’s info-warriors started the rumour that Covid-19 was a US bioweapon, which was picked up by conspiracy theorists worldwide. In 2016, Ukrainians fighting Russian-backed forces in the Donbas region began to be targeted with text messages apparently from fellow soldiers, saying “nobody needs your kids to become orphans” and urging them to abandon their posts. These were sent by Moscow’s drone-based Leer-3 electronic warfare system, which can hijack up to 2,000 mobile connections at once.
If Galeotti disputes that his vision of permanent, low-level conflict is dystopian (“I would certainly rather be targeted by disconcerting memes than nuclear missiles”), Payne’s book is noticeably short on optimism. While he makes none of the apocalyptic claims for weaponised AI that Stuart Russell does, he is clear-sighted about its potential. Algorithms have already proved their dominion over humans in sophisticated games of strategy and bluff such as chess and poker. Warbots, he writes, will be extremely efficient killers — “accurate and relentless”. Fully autonomous lethal weapons (which can select their own targets and fire independently) already exist in the field, and even if they haven’t killed anyone yet, he says, “it is just a matter of time before they do”.
However, these weapons have critical limitations. Payne’s central argument is that AI combat systems may be tactically brilliant, but they will always be strategically weak because they lack the essential human factor. War involves deciding how and when to fight, and second-guessing your foe’s ultimate aims. “There’s no recipe book for success that you can train your [algorithm] on,” Payne writes. “The cognitive challenge of war is far more complex than the cognitive challenge of battle”.
One clear conclusion from both books is that traditional defence platforms such as aircraft carriers are increasingly redundant in an era when warfare is either far below the radar or accelerating into the realm of AI. Another is that we lack the rules and ethical structures to police these evolving modes of conflict. Galeotti argues that while it is tempting to downplay the power of international institutions such as Interpol, the ICJ and the WHO, they are needed now more than ever to adjudicate fractious relations between states.
Payne, on the other hand, is sceptical that it will be possible to ban or restrict the use of AI weapons. As he points out, nuclear proliferation is easier to monitor because there tends to be a “trail of suspicious activity to follow”, such as rocket tests, underground explosions and conspicuous enrichment sites. None of these telltale signs applies to AI weapons. States consider warbots far too useful to consider outlawing them, he writes, and in any case, regulation is almost impossible when there is still so much uncertainty about what this technology can actually do. He suggests the only option is for enlightened countries to “craft our own rules” — a somewhat underwhelming proposal, given what is at stake.
But Payne is certainly right that we, as societies, must consider whether we are ready to accept the loss of final control involved in what he calls “abrogating the ultimate [battlefield] decision to an amoral agent”. In this context, the only prospect of man abandoning war, as Edison predicted, will be so he can delegate it to a machine.
The Weaponisation of Everything: A Field Guide to the New Way of War by Mark Galeotti, Yale £20/$26, 248 pages
I, Warbot: The Dawn of Artificially Intelligent Conflict by Kenneth Payne, Hurst £20, 336 pages/Oxford University Press $29.95, 280 pages
Helen Warrell is the FT’s former Defence and Security Editor