AI-enabled technologies can serve two distinct functions in battlefield settings. On the one hand, by collecting data and issuing targeting recommendations, machines can support human decision-making; on the other, decision-making can be delegated to machines, enabling them to engage targets without the direct intervention of human operators. Regarding the first function, we show how machine-issued target-identification and target-engagement recommendations, despite notable efficiency gains, also carry possible ethical downsides. These are discussed in terms of the ‘overfitting problem,’ the ‘classification problem,’ ‘information overload,’ ‘automation bias’ and ‘automation complacency.’ Regarding the second function – autonomous weapon systems (AWS) – we survey the ethical arguments that have been advanced for and against their deployment on the battlefield. We then distinguish the deliberate misuse of AI systems from problems associated with accidents and safety, and explain why precautions, including rigorous testing, must be introduced early on, when new systems are designed. Later, during military training, feedback loops can ensure that systems are appropriately modified in light of user experience. The interaction of designers, autonomous technologies and end-users can fruitfully be assessed under the rubric of ‘virtue ethics,’ as the chapter's concluding section shows.
Reichberg, Gregory M. & Henrik Syse (2024) ‘Ethical analysis of AI-based systems for military use,’ in Artificial Intelligence, Ethics and the Future of Warfare. India: Routledge (33–1). DOI: 10.4324/9781003421849.