Biased artificial intelligence systems and their implications in war scenarios

Book chapter

Fisher, Kelly (2024) 'Biased artificial intelligence systems and their implications in war scenarios', in Artificial Intelligence, Ethics, and the Future of Warfare: Global Perspectives. India: Routledge (12–1). DOI: 10.4324/9781003421849.

In recent years, there has been a growing debate about the ethical use of artificial intelligence (AI) in warfare. However, few of these debates have focused on the issue of biased AI, even though biased AI has gained considerable prominence in discussions of AI in civilian settings. This chapter aims to bring together these two conversations: the ethical use of AI in military settings and the issue of biased AI in civilian settings. Drawing upon examples of biased AI in civilian settings, I highlight the implications these examples may have in military settings. This includes how assumptions about gender, ethnicity, and other factors may result in biased algorithms that select military targets, and how insufficient training data may result in inaccurate and biased facial recognition that disproportionately impacts certain groups in society. In the conclusion, I argue that there needs to be more focus on biased AI in the broader discussions of AI, ethics, and warfare.