Robots that Kill: The Case for Banning Lethal Autonomous Weapon Systems

In the days leading up to its withdrawal from Afghanistan, the U.S. military conducted a drone strike that killed 10 civilians in Kabul. The timing of this tragedy, in the midst of the mass evacuation from Afghanistan, casts doubt on the U.S. military’s promise to stop serving as a global policeman. The Biden administration has not ended the “forever wars” — it has simply elected to fight them with robots in the sky rather than boots on the ground.

Pointing to the drone strike in Kabul as prime evidence, many experts warn of the dangers of Biden’s ‘over-the-horizon’ counterterrorism strategy, which uses imprecise semi-autonomous drones to replace human soldiers and combat terrorists from afar. Little attention, however, is being paid to an even more threatening weapon that may define the coming decades of war. Soon, guided missiles and semi-autonomous drones may be replaced by fully autonomous weapons that have the ultimate say over who lives and who dies. 

What Are Lethal Autonomous Weapon Systems?

Lethal autonomous weapon systems (LAWS) are already being introduced into military arsenals, and the United States, Russia, South Korea, Israel and the United Kingdom have shown a keen interest in their development. Unlike semi-autonomous drones, LAWS can select targets and attack them without any human intervention.

These weapons are still in their infancy and will likely gain greater autonomy and capability over time. One type of autonomous weapon would, once activated by a human operator, fly around the world, identify its targets and fire missiles at them. A preliminary version of this weapon already exists in the Israeli Harpy, which is programmed to loiter over a predetermined area, detecting and attacking enemy radar emitters. Political scientist Michael C. Horowitz posits in Dædalus that, as the technology progresses, militaries may even field LAWS that serve as operations planning systems: autonomous battle systems that “could decide the probability of winning a war and whether to attack, plan an operation and then direct other systems — whether human or robotic — to engage in particular attacks.”

The appeal of LAWS to countries like the U.S. and Russia is quite intuitive. If a country can fight wars with ruthless efficiency, accurately pick out terrorists from hundreds of feet in the sky, and spare the lives of thousands of its soldiers, why wouldn’t it do so? A closer inspection, however, reveals that the costs of this technology vastly outweigh its benefits.

The Danger in Killer Robots

The use of LAWS would lower the threshold for states going to war, increasing the likelihood of conflict. Many philosophers, political scientists and governments have expressed the concern that militaries will resort to conflict more often if they do not need to rely on soldiers and can use LAWS instead. Domestic populations will be less wary of conflict if it no longer means seeing fellow citizens risk their lives on the battlefield. 

The threshold-lowering effect of LAWS is particularly relevant given the current bipartisan trend in the U.S. against intervention. It is plausible that, without LAWS, the era of unilateral U.S. interventions and the war on terror would come to an end. Recognizing the failures of the wars in Vietnam, Iraq and Afghanistan, politicians across the political spectrum are pushing to stop sending troops abroad to risk their lives. But the option of using LAWS, and thereby sidestepping the costs to a country’s soldiers, threatens to reverse this anti-war trend and give militaries a politically palatable way of fighting wars. Liberating militaries from the political constraints that keep them from going to war could have catastrophic consequences.

The first wave of LAWS proliferation may simply look like the natural progression of our current drone capabilities. For instance, Russia may have already used autonomous drones to attack targets in Syria, but these weapons differ from current semi-autonomous drones only in the greater degree of risk assumed by eliminating human intervention. In other cases, however, LAWS will offer substantial advantages that make them different in kind from drones as we know them. Consider, for example, Azerbaijan’s use of Israeli-supplied IAI Harop drones in the 2020 war with Armenia. This loitering munition system allowed small, hard-to-detect autonomous drones to circle over the enemy’s defense lines, pick out targets and attack them, an ability that proved decisive in Azerbaijan’s victory.

Understanding what a world with LAWS will look like in the long term requires a bit of imagination. Perhaps a post-withdrawal Afghanistan will involve weapons like the Harop constantly roaming the skies and diving down to take out targets. Or maybe we will see the chilling predictions of science fiction come true. In their book AI 2041, writers Chen Qiufan and Kai-Fu Lee express their fear that LAWS will fall into the hands of armed groups and terrorists. They describe a “Unabomber-like scenario in which a terrorist carries out the targeted killing of business elites and high-profile individuals,” using autonomous drones that rely on facial recognition to identify their targets. Toby Walsh, a leading expert in artificial intelligence, warns of these weapons falling into the hands of dictators and being used as tools of ethnic cleansing.

Even if we assume that LAWS are operated primarily by legitimate militaries, additional complications arise when we consider unjust killings. Philosopher Robert Sparrow argues that the autonomy of LAWS makes it impossible to hold anyone accountable for the illegitimate killings they commit. If the robot acted autonomously, tracing accountability back to another agent seems morally objectionable and legally infeasible. But it would also be unjust to leave illegitimate killings unpunished. The result is a so-called ‘responsibility gap’: no one can be held responsible for illegitimate killings, and wrongful acts of war go undeterred.

Preventing the Next Arms Race

Despite these grave concerns, countries are pushing ahead with the research and development of LAWS. With large military powers leading the race, there are two potential outcomes if this trend goes uninterrupted. One is that LAWS become tools with which powerful militaries destabilize other regions, starting a new chapter of the ‘forever wars’ without boots on the ground. The other is that LAWS become front and center in conflict between the large military powers themselves. They may drag us into a new war between superpowers, one without the mutually assured destruction that deters nuclear warfare, since LAWS can carry out a series of smaller, yet still extremely damaging, attacks that are not deterred by the threat of retaliation.

The movement against LAWS is small, but it is growing. More and more countries have expressed concern about the destabilizing effects of these weapons and stressed the need for a collective agreement to rule them out, much like the existing treaties that limit chemical, biological and intermediate-range nuclear weapons. However, military powers like the U.S. and Russia have blocked regulation of LAWS at the Convention on Certain Conventional Weapons and are quietly leading what some call the third revolution in warfare.

The challenge in regulating or banning LAWS, as with many forms of international cooperation, is overcoming collective action problems. The development of LAWS is a textbook example of a “security dilemma,” in which one country perceives another’s heightened security measures as a threat and adopts similar measures in response, raising the risk of escalation to an outcome neither party desires. Our best hope of confronting this dilemma is to foster international negotiations that expose to military superpowers the grave risks LAWS present. Many countries fear falling behind if they make the first move to disarm and de-escalate, but once the stakes are high enough and it is clear that nobody, including the dominant powers, is immune to the dangers of LAWS, we may yet see sufficient international will to address them.

While LAWS still appear to be in their infancy, we are running out of time to prevent their uncontrolled proliferation. Once one country uses these weapons to significantly tilt the playing field in its favor, others may have no choice but to follow suit. It is therefore imperative that we switch off the robots before they take over the battlefield and the horrors of science fiction become reality.  

Image Credit: “Missile” by Louis Mornaud is licensed under the Unsplash License