Ethical Risk Management for AI-Enabled Weapons: A Systems Approach (ERM)

Jan 2025 – Jun 2028

Photo: US Department of Defense / Sgt. Cory D. Payne
This interdisciplinary project engages with defense practitioners and policymakers to develop theory-grounded, actionable risk assessment and mitigation strategies for AI-enabled weapons. ERM is led by Jovana Davidovic and Greg Reichberg.

There has been a steep increase in the reliance on AI tools for warfighting. AI has the potential to enable faster and more informed decision-making on the battlefield: better surveillance, tracking, target identification, and validation, as well as more data-driven support for considering different courses of action and selecting means to pursue them.

These changes to the way wars are fought have rightly led to a range of ethical concerns about the use of AI for life-or-death decisions on the battlefield. These concerns include safety and accuracy, transparency, explainability, robustness and brittleness, and the assignment of responsibility.

ERM proceeds from the assumption that the right approach to identifying and mitigating ethical risks is to consider the entire targeting process, as well as the complete lifecycle of the algorithms and AI tools that play a role in that process.

ERM will develop actionable recommendations for mitigating the ethical risks that emerge from increased reliance on AI for warfighting.

ERM engages with stakeholders across industry, government, and international organizations to better understand the tasks in each part of the targeting process, including the common algorithm types used at different stages of that process. The project focuses on human-machine interactions in the system and considers how such interactions can be leveraged to mitigate key ethical risks. Mapping the development and use of AI-enabled weapons in this way will sharpen dialogue on key questions about the governance of AI weapons and provide a framework for devising ethical risk assessment tools and risk management strategies for industry and regulators.

Outcomes

  1. A map of the governance landscape for AI-enabled weapons: Mapping the AI tools used in the targeting process and describing the lifecycles of common AI tools will define a clear landscape for policy and academic conversations.
  2. Clarity with respect to key terms: Academic papers and memos that clarify key terms and their uses, enabling clearer future deliberations on AI weapons governance.
  3. Ethical risk assessment tools: Risk assessment tools for industry, defense contractors, defense department procurement teams, and investors, which are grounded in real world understanding of how warfighting AI tools and weapons are developed, tested, evaluated, procured, fielded, and used.
  4. Guidance for policymakers: The outcomes under 2 and 3 will also provide guidance for policymakers. Risk assessment tools can ground auditability, and thus the ability to assure compliance, while agreement and clarity around key terms will strengthen the ability of proposed policies to guide action.

ERM's institutional research partner is the Center on National Security at Georgetown University Law Center. Its collaborating partners are the Norwegian Ministry of Defense, the Norwegian Ministry of Foreign Affairs, and the Norwegian Red Cross.