In 2012, Human Rights Watch (HRW) issued a call for a global ban on autonomous weapons. A new NGO, the Campaign to Stop Killer Robots (CSKR), was formed in October 2012 to promote such a ban. In 2015, the Future of Life Institute (FLI) issued a further call for a ban, though this time restricted to offensive autonomous weapons. The FLI proposal garnered the support of tens of thousands of signatories, including such prominent figures as Elon Musk and Stephen Hawking, and generated considerable attention in the international press and on social media. Meanwhile, the CSKR helped to organize “informal meetings of experts,” held in Geneva beginning in 2014 under the auspices of the UN’s Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW), to explore the possibility of adding an autonomous weapons ban to the existing bans on land mines and blinding lasers, among other banned or restricted weapons. In 2017, these sessions were elevated to annual, still-ongoing meetings of a formally constituted Group of Governmental Experts (GGE). Against the background of these developments on the international legal front, an extensive literature on the ethics and policy of autonomous weapons has emerged, and media attention to the debate has intensified. At least in the public arena, momentum seems to be building for some kind of ban. Is a ban the right way to go? I think not.