Tools of War and Virtue: On the Effects of AI on Human Ethical Expertise in the Military

Led by Sigurd Hovd

Aug 2020 – Oct 2024

Tools of War and Virtue explores how AI may affect human military professionals’ ability to exercise their professional ethical expertise.

It does so by highlighting the ways in which professional military ethics, as a specific ethical practice and expertise, depends on features of the institutional environment in which it is embedded. Moral reasoning in the military consists in deliberation about a collective endeavor through institutional roles; to act morally, one depends on these roles being correctly aligned with the common good for which the institution exists. A risk of implementing AI within the military is that these institutional roles become misaligned through the extensive institutional adaptation such implementation will require.

On a more fundamental level, the thesis should be read as a reappraisal of the metaphysics of moral agency at the advent of a technological revolution. It constitutes an intervention against a set of individualistic presuppositions about our capacity for moral agency, presuppositions that, it argues, can cloud both how we view the risks associated with human-AI interaction and the possible capabilities of AI systems. It offers a new perspective on the military institution, with its respective professional roles, as a cognitive tool through which we can enact a specific kind of moral practice.

Tools of War and Virtue consists of four academic articles with one central theme: the dependency of moral agency on sociality and its implications for the ethics of AI. Two articles explore this theme by looking at professional military ethics: one through the concept of ethical deskilling, the other through the concept of technological disruption. The remaining two articles explore key theoretical questions raised by the thesis's anti-individualistic approach: one explores its implications for how we should think of the prospect of artificial moral agents; the other provides a deeper meta-normative foundation for the thesis's overarching approach.

Adviser at PRIO: Greg Reichberg

Adviser at UiO: Sebastian Watzl
