People get tired of fighting, but do “machines”? How AI (autonomous weapons systems) escalates wars.

Written by Gabriel Udoh, PhD candidate in Autonomous Weapons Systems and International Humanitarian Law, Europa-Universität Viadrina, Frankfurt (Oder)

By now, most of the early speculation about the capacity of AI to cause harm has gradually simmered down. The sudden emergence and popularity of large language models, coupled with the lack of adequate legislation, have made it easier to agree that these innovations need to be regulated. Governments are doing their best to ensure that “we do not build things that destroy us in the end”. This holds true for several areas, but it is highly debatable when it comes to military applications of AI.

In my last post, I mentioned that one of the negative effects of incorporating AI into lethal weapons systems is the escalation of armed conflict itself. This can happen on several fronts: soldiers can fight “from the comfort of their homes”, a machine error can communicate the wrong intentions, and lethal autonomous weapons systems can cause far more damage in a short time than human soldiers.


Stanislav Petrov is known to have saved the world from nuclear catastrophe in 1983, when the USSR’s semi-automated nuclear early-warning system picked up signals of an impending nuclear attack from the US. The alert was later found to be a system error, arising from a rare alignment of sunlight on high-altitude clouds and the satellites’ orbits, which the system misread as missile launches. Petrov could avert disaster because he was “in the loop” and the system was not fully autonomous. Meaningful human control over the defense system provided a second-level check and further analysis, which revealed the fault and prevented a counter-attack. Nor would this be the only situation in which an error or a single incident leads to war: after all, the First World War grew out of a chain of events set off by the assassination of Archduke Franz Ferdinand of Austria.[1] Accidents will always occur, but autonomous weapons systems are simply more prone to them. There is a higher probability that machine errors occur even in peacetime and signal an impending attack to the other side, or that, in an armed conflict where a ceasefire has been declared, an unmanned aerial system drops a bomb in the middle of negotiations.
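
To make this structural point concrete, here is a minimal, purely illustrative Python sketch. It is not a model of Petrov’s actual system; the confidence threshold, the names, and the operator check are my own assumptions. The only difference between the two response paths is whether a human judgment sits between the sensor and the action, and that single check is what can turn a false alarm into a non-event.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """A single alert from an automated sensor (fields are illustrative)."""
    source: str
    confidence: float  # 0.0-1.0, as reported by the (possibly faulty) sensor


def autonomous_response(detection: Detection) -> str:
    """Fully autonomous path: the sensor reading alone triggers the response."""
    if detection.confidence > 0.9:
        return "LAUNCH COUNTER-ATTACK"
    return "STAND BY"


def human_in_the_loop_response(detection: Detection, operator_confirms) -> str:
    """Semi-autonomous path: a human must independently confirm the alert first."""
    if detection.confidence > 0.9 and operator_confirms(detection):
        return "LAUNCH COUNTER-ATTACK"
    return "STAND BY"


# A false alarm reported with high confidence (e.g. sunlight misread as
# missile launches) passes straight through the autonomous path...
false_alarm = Detection(source="early-warning satellite", confidence=0.97)
print(autonomous_response(false_alarm))  # -> LAUNCH COUNTER-ATTACK

# ...but a sceptical operator who judges the alert implausible stops it.
print(human_in_the_loop_response(false_alarm, lambda d: False))  # -> STAND BY
```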

In what Paul Scharre calls “the consequences of removing the moral burden of killing”, there is the fear that “delegating the decision to kill, however, could increase the psychological distance from killing” and relax the moral restraints humans feel when shooting down others.[2] One of the major reasons wars come to an end is that human soldiers become mentally drained: they run out of weapons and of the will to keep fighting, see too many deaths, receive distressing information during combat, grow physically weak, and so on. These factors do not generally affect autonomous weapons systems. So, first, technologies that keep soldiers away from the battlefield may sound like a life-saving innovation (I also think they are). However, we must realize that humans tend to feel less responsible for crimes committed “from a distance”. Cummings likewise argues that “physical and emotional distancing can be exacerbated by any automated system that provides a division between a user and his/her actions…these moral buffers, in effect, allow people to ethically distance themselves from their actions and diminish a sense of accountability and responsibility”.[3] Keeping soldiers near their victims preserves a degree of moral accountability and restraint; increasing that distance erodes this accountability, makes killing more palatable, and escalates wars.

The ongoing conflict in Ukraine is perhaps the most recent instance of armed conflict between “heavily armed” nations. Much damage has been done, especially in Ukraine, given the superior technological capacity of the Russian forces. The use of Iranian-made kamikaze drones has been reported, and the Turkish Bayraktar (nicknamed “the Toyota Corolla of drones”) has reportedly been deployed.[4] These systems are capable of causing long-term and widespread damage with greater precision and at lower cost. Unmanned vehicles also leave lasting psychological harm in the communities where they have been used. A Stanford–NYU report indicates that Pakistani civilian populations have been traumatized by the long-drawn-out American drone campaigns.[5] Apart from explosives capable of wiping out large communities, we may also have to contend with post-traumatic stress disorder among those who survive the destruction caused by lethal autonomous weapons systems.

There are several other necessary considerations. For instance, autonomous weapons systems are cheaper to acquire and easier to make. One report describes how Ukrainian forces are using drones made out of cardboard.[6] It has gradually become easier even to buy some off the internet. This, too, makes fighting more attractive when “everyone” has personalized weapons. I intend to discuss weapons proliferation in another article.

Technological developments in armed conflict have led to the gradual introduction of autonomy into weapons systems. While this promises certain advantages, it is important to anticipate, and guard against, the risk that these systems run out of control, make mistakes, cause severe harm, and destroy any line of responsibility and accountability for their activities.

[1] Scharre, P., Army of None: Autonomous Weapons and the Future of War (W.W. Norton & Company, New York, 2019), p. 207.

[2] Ibid.

[3] Cummings, M. L. (2004). Creating moral buffers in weapon control interface design. IEEE Technology and Society Magazine, 23(3), 28–33, 41. doi:10.1109/MTAS.2004.1337888.

[4] Drones over Ukraine: What the war means for the future of remotely piloted aircraft in combat. The Conversation.

[5]  James Cavallaro, Stephan Sonnenberg, and Sarah Knuckey, Living Under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan, Stanford: International Human Rights and Conflict Resolution Clinic, Stanford Law School; New York: NYU School of Law, Global Justice Clinic, 2012.

[6] Ukraine using ingenious tiny drones made of cardboard to bombard invading Russians