Reliance on AI tools for warfighting has increased steeply. AI has the potential to enable faster and better-informed decision-making on the battlefield: improved surveillance, tracking, target identification, and validation, as well as more data-driven support for weighing different courses of action and selecting means to pursue them. Increased reliance on AI has also fueled a push for weapon systems that perform more tasks without continuous human intervention.
These changes to the way wars are fought have rightly raised a range of ethical worries about the use of AI for life-or-death decisions on the battlefield, including concerns about safety and accuracy, transparency, explainability, robustness and brittleness, and the assignment of responsibility. This project addresses how best to manage the ethical risks that emerge from increased reliance on AI in warfighting in general and in targeting in particular.
Meaningful progress in regulating and assuring the safety of AI-enabled weapons should begin with an appreciation that a weapon relying on AI has a complex lifecycle comprising the operation of two systems. The first system culminates in fielding a weapon, that is, making it available to the military for potential use. The goal of this system is to deliver a weapon that can be used ethically under its anticipated conditions of use. The second system is a targeting process that culminates in the use of a weapon. The goal of this system is to ensure that the weapon, once fielded, is used ethically.
Within each of these systems, humans and AI interact in performing various tasks. The central question for ERM is how to identify and mitigate the key risks that emerge from the range of human-machine interactions constituting these two systems.
ERM will engage stakeholders across industry, government, and international organizations to better understand the tasks in the fielding and targeting systems and the ethical salience of the human-machine interactions within them. Mapping the development and use of AI-enabled weapons in this way will sharpen dialogue on key questions about the governance of AI-enabled weapons and provide a framework for devising ethical risk assessment tools and risk management strategies for industry and regulators.
ERM collaborates with the Center on National Security at Georgetown University's Law Center. Its partner institutions are the Norwegian Ministry of Defense, the Norwegian Ministry of Foreign Affairs, and the Norwegian Red Cross.
The project is funded by a grant from the Research Council of Norway.