15 March, 2017
At a recent workshop on the ethics of artificial intelligence in Washington, DC, I was asked to give a short ‘conversation starter’ talk about the ethics of autonomous weapon systems. Unfortunately for the participants, I had little prior knowledge of autonomous weapons or the ethical issues they raise. In this blog, I’ll summarize what I learned while preparing the talk and give a sense of how participants responded. The discussion focused mainly on the ethics underpinning regulatory action in this area.
Imagine a world with no regulatory or legislative initiatives to curb the development of autonomous weapons. Flying drones capable of harming or killing people would replace human combatants wherever possible, sparing soldiers who would no longer need to set foot on a battlefield. The likelihood of warfare could thus increase, as war would come to be considered more ‘clean’. Further, it may become unwise to route a drone’s battlefield decisions through a human operator, since those communications can be intercepted, manipulated, or blocked. Without human control, engineers could optimize both attack efficiency and strategy. And because a predictable strategy is easy to counter, unpredictability through randomized behavior could become a virtue. Attack may thus become the best defense, leading to an arms race in unpredictable, efficient, flying killing machines that lack empathy or human judgment in any given situation or context.
The costs and acquisition barriers of autonomous weapon systems are lower than those of other lethal weapons that cause harm without direct human combatants (e.g. land mines, chemical weapons). Even if regulation existed, an inevitably emerging black market would allow malevolent groups to obtain autonomous weapon systems for targeted assassinations or terrorist ends. Furthermore, international humanitarian law can be ignored, and breaches can be politically justified, so it is far from certain that regulation would curb the use of these weapons. Repressive dictators could even use them to efficiently weed out and neutralize potential troublemakers, without fearing that their own troops might have second thoughts and turn against the regime. Given this dystopian, though somewhat plausible, trajectory for autonomous weapons development, it becomes necessary to think about their regulation today.
My first question was: what is the role and efficacy of prohibitive international laws in this space? Participants noted that fully autonomous weapons lack the human qualities necessary to satisfy the principles of international humanitarian law, such as distinction, proportionality, and military necessity. These rules can be complex, require subjective decision making, and their observance typically entails human judgment and deliberation. Still, new legislation banning such weapons could be effective, much like existing laws banning landmines and chemical weapons, which can likewise kill people in a war without distinction of status (i.e. civilian or soldier).
Can professional and research engineering ethics play a role in averting a world infested with merciless killing machines? While considered a softer instrument than law, community standards for research disciplines or R&D labs do, to some extent, influence the trajectory of technology development. The aim of such documents is to ensure that engineers understand the social and political effects of their technical creations. Some research institutions and communities have articulated their ethical commitments in the recently published Asilomar AI Principles, which state that “an arms race in lethal autonomous weapons should be avoided.” Such standards typically rely on naming and shaming as their enforcement method.
It remains to be seen which technology governance methods can curb a slide into such a dystopian world. Under pressure from the Campaign to Stop Killer Robots, the United Nations Office for Disarmament Affairs has taken up formal talks with experts and country representatives (see point 15 in this pdf). Most likely, best practices will be promoted by professional organizations and enforced through international law that discourages weapons developers – ironically – by appealing to their human empathy.
Academic Liaison at Princeton University