The Future of War

A.C. Grayling

The history of drones is surprisingly long: as a special form of ‘unmanned aerial vehicle’ (UAV), they were developed long ago to undertake tasks considered ‘too dull, dirty or dangerous’ for human beings. UAVs were in rudimentary use before the First World War for target practice; they served as flying bombs in the First and Second World Wars, were used as decoys and surveillance devices in the Arab–Israeli Yom Kippur war of 1973, and in Vietnam undertook more than 3,000 reconnaissance missions. But after 2001 military UAVs increasingly became central to US operations in the Middle East and Afghanistan, and in hunter-killer roles. The Predator drone became operational in 2005, the Reaper in 2007; since then they have grown in number to constitute almost a third of US aircraft strength, and have been used in many thousands of missions against targets across those regions. Hunter-killer drones over Afghanistan are remotely operated from air force bases in the United States, such as Creech AFB near Las Vegas. The personnel selected to operate them are generally the kind of young men who are good at such video war games as Call of Duty and Combat Mission.

In the terminology of remote warfare, drones are described as ‘human-in-the-loop’ weapons, that is, devices controlled by humans who select targets and decide whether to attack them. Another development is the field of ‘human-on-the-loop’ systems, which are capable of selecting and attacking targets autonomously, though with human oversight and the ability to override them. The technology causing most concern is ‘human-out-of-the-loop’ systems: completely autonomous devices on land, under the sea or in the air, programmed to seek, identify and attack targets without any human oversight after the initial programming.

The more general term used to designate all such systems is ‘robotic weapons’, and for the third kind ‘lethal autonomous robots’, ‘lethal autonomous weapons’ (LAWs) or, colloquially and generally, ‘killer robots’. The acronym ‘LAWs’ is chillingly ironic. Expert opinion has it that they could be in operational service before the middle of the twenty-first century. It is obvious what kind of concerns they raise. The idea of delegating life-and-death decisions to unsupervised armed machines is inconsistent with humanitarian law, given the danger that they would put everyone and everything at risk in their field of operation, including non-combatants. Anticipating the dangers and seeking to preempt them by banning LAWs in advance is the urgently preferred option of human rights activists.

It was noted in the last chapter that international humanitarian law already has provisions that outlaw the deployment of weapons and tactics that could be particularly injurious, especially to non-combatants. LAWs are not mentioned in the founding documents, of course, but the implication of the appended agreements and supplementary conventions is clear enough. They provide that novel weapons systems, or modifications of existing ones, should be examined for their consistency with the tenor of humanitarian law. One of the immediate problems with LAWs is whether they could be programmed to conform to the principle of discrimination, that is, to be able to distinguish between justified military targets and everything else. Could they be programmed to make a fine judgement about whether it is necessary for them to deploy their weapons? If so, could they be programmed to adjust their activity so that it is proportional to the circumstances they find themselves in? Distinction, necessity and proportionality are key principles in the humanitarian law of conflict, and in each case flexible, nuanced, experienced judgement is at a premium. Could a computer program instantiate the capacity for such judgement?

An affirmative answer to these ‘could’ questions requires artificial intelligence to be developed to a point where analysis of battlefield situations and decisions about how to respond to them are not merely algorithmic but have the quality of evaluation that, in human beings, turns on affective considerations. What this means is best explained by considering neurologist Antonio Damasio’s argument that if a purely logical individual such as Star Trek’s Mr. Spock really existed, he would be a poor reasoner, because to be a good reasoner one needs an emotional dimension to thought. A machine would need extremely subtle programming to make decisions in the way human beings do, and humanitarian considerations are premised on the best possibilities of the way human beings make decisions about how to act. In particular, creating a machine analogue of compassion would be a remarkable achievement; but a capacity for compassion is one of the features that intelligent application of humanitarian principles requires.

An answer to this is that the human emotional dimensions invoked are just what should not be required on the battlefield. Machines, says this answer, would be less erratic than most if not all humans, because never emotionally conflicted, and would be swifter and more decisive in action.

This is true. But the question is whether we wish the decision-maker in a battle zone to be this way, given that among the necessary conditions for conforming to humanitarian law is the capacity to read intentions, disambiguate and interpret odd behavior, read body language and the like. These are psychological skills that humans usually develop early in life, and which they apply in mainly unconscious ways. Would a killer robot be able to tell the difference between a terrified individual trying to surrender and an aggressive individual about to attack? Grasping what a person intends or desires by interpreting their actions is a distinctive human skill. To program killer robots with such capacities would be yet another remarkable achievement.

And who would be held accountable if a LAW went haywire and slaughtered children in an orphanage, demolished hospitals full of sick and wounded, killed everyone it encountered irrespective of who or what they were and what they were doing? Would it be the military’s most senior commanders? The programmers? The manufacturers? The government of the state using them?

From War: An Enquiry by A.C. Grayling. Published by Yale University Press in 2018. Reproduced with permission. 


A.C. Grayling is Master of the New College of the Humanities and a Supernumerary Fellow of St Anne’s College, Oxford. He is active in the field of human rights and conflict intervention and has written or edited more than thirty books.

