Autonomous weapons systems: how to work towards a total ban?

  • September 13, 2019
  • Christiane Saad and Ewa Gosal

1. Introduction

The potential uses of artificial intelligence (AI) are extensive, ranging from finance, health care, justice and education to military applications. While the application of AI has been widely acclaimed in non-military technologies, its use by the military has been the subject of intense debate. In particular, it is the ethical and legal implications of Autonomous Weapon Systems, also referred to as Lethal Autonomous Weapon Systems, that have caused great controversy.

Approaches to regulating the development and use of AWS vary from state to state. While some states believe new legislation is required, others prefer less stringent political measures and guidelines.1 The debate ranges from calls for a complete ban on their development, production and use (because of their inability to apply human judgment and to comply with the law) to arguments that they are more precise and could reduce casualties during conflicts.

It should be highlighted that there is no internationally agreed formal definition for AWS or LAWS. For the purposes of this paper, the term AWS will be used and is defined as:

“Any weapon system with autonomy in its critical functions — that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention.”2

We provide an overview of the main issues debated with regard to AWS and international law, and note possible ways forward for regulating AWS.

2. AWS and its legal and ethical implications

2.1 From artificial intelligence to lethal autonomous weapons

The definition of AI shifts depending on the entity providing it and the goals that entity is trying to achieve with an AI system.3 AI makes automation scalable: a single system can automate millions of devices. The majority of the AI fielded today is what is called “narrow AI”, which involves creating programs that demonstrate intelligence in a single specialized area, such as chess-playing, medical diagnosis or automobile driving.

Artificial general intelligence (AGI) refers to a software program that can solve a variety of complex problems across different domains and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions. Like all major scientific discoveries and technical breakthroughs, AGI has the potential to transform our lives and even the fate of the human species, for better or for worse.4 AGI is expected to successfully perform any intellectual task that a human being can, including reasoning, solving problems, making judgments under uncertainty, and even being innovative and creative.

AI is leading towards a new algorithmic warfare battlefield that has no boundaries or borders, may or may not have humans involved, and will be impossible to understand and perhaps control across the human ecosystem in cyberspace, geospace (near earth outer space) and space.5 This is why it is time to align these new developments with existing laws and regulations that currently guide what is considered to be morally acceptable warfare.

2.2 AWS and autonomy

The lack of a universally accepted definition of what constitutes AWS is often cited as a reason for not proceeding towards any kind of international governance over autonomous weapons.6 The main reason for this is the difficulty of selecting the threshold at which a weapon is considered autonomous. The systems' various capabilities, applications and levels of human interaction create a wide spectrum of autonomy. Generally, definitions of autonomous weapon systems can be classified into three groups.7

The first group is based on the kind of human involvement, with varying degrees of autonomy: 1) human-in-the-loop control, where semi-autonomous weapon systems require a human operator to select and authorize engagement of specific targets; 2) human-on-the-loop control, covering human-supervised autonomous weapon systems, which allow human intervention and, if needed, termination of the engagement, with the exception of time-critical attacks on platforms or installations; and 3) human-out-of-the-loop weapons, meaning autonomous weapon systems which, upon activation, can select and engage targets without human intervention.8 Within each of these categories, there are many intermediate levels in the way human and machine decision-making may be designed, as the sketch below illustrates.
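To make these categories concrete, consider the following short sketch in Python. It is purely illustrative: the names (HumanOversight, may_engage) are hypothetical and the logic is a caricature of where the engagement decision sits, not a description of any real control system.

    from enum import Enum

    class HumanOversight(Enum):
        """Illustrative labels for the three categories of human involvement."""
        IN_THE_LOOP = "human selects targets and authorizes each engagement"
        ON_THE_LOOP = "human supervises and may terminate an engagement"
        OUT_OF_THE_LOOP = "system selects and engages without human input"

    def may_engage(oversight: HumanOversight,
                   human_authorized: bool = False,
                   human_vetoed: bool = False) -> bool:
        """Toy model of where the engagement decision sits in each category."""
        if oversight is HumanOversight.IN_THE_LOOP:
            # Semi-autonomous: nothing happens without explicit authorization.
            return human_authorized
        if oversight is HumanOversight.ON_THE_LOOP:
            # Human-supervised: the engagement proceeds unless a human intervenes.
            return not human_vetoed
        # Out of the loop: once activated, there is no human gate at all.
        return True

Even this caricature shows why the category boundaries matter legally: in the third branch, no human judgment intervenes between target selection and attack.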

The second group of definitions is based on capability parameters, such as understanding intent and direction.9

The third group is structured along legal lines and lays emphasis on the nature of the tasks that the systems perform autonomously.10 It includes the International Committee of the Red Cross (ICRC) definition covering systems with autonomy in their “critical functions”, that is, systems able to independently acquire, track, select and attack targets without human intervention.11

The term autonomy is important for understanding debates about AWS. Autonomy results from the delegation of a decision to an authorized entity to take action within specific boundaries. An important distinction is that systems governed by prescriptive rules that permit no deviations are automated, but they are not autonomous. To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.12
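The automated/autonomous distinction can likewise be caricatured in a few lines of Python. In the sketch below (again a hypothetical illustration, not any real system), the first function is automated because its prescriptive rule fixes the outcome in advance, while the second composes and selects among courses of action against its own model of the situation:

    # Automated: a prescriptive rule that permits no deviation. The mapping
    # from input to action is fully fixed by the designer in advance.
    def automated_response(sensor_reading: float, threshold: float) -> str:
        return "alert" if sensor_reading > threshold else "hold"

    # Autonomous (schematically): the system evaluates candidate courses of
    # action against its own model of the situation; the designer fixes the
    # goal, not the specific action taken.
    def autonomous_response(world_model: dict, courses_of_action: list) -> str:
        def expected_goal_value(action: str) -> float:
            # Toy stand-in for the system's "knowledge and understanding".
            return world_model.get(action, 0.0)
        # Assumes a non-empty list of candidate actions.
        return max(courses_of_action, key=expected_goal_value)

The design point the sketch makes is that the second function's behaviour cannot be read off its code alone; it depends on the world model at run time, which is precisely why delegation of such decisions raises the accountability questions discussed below.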

2.3 Assessment of the legality of AWS

Although the use of AI may reduce the collateral damage and harm of war by allowing better-targeted decision-making, fully autonomous weapons would also create new moral, technical and strategic dilemmas, which is why some scientists, activists and governments call for their preemptive ban.13

AI raises a number of ethical concerns, including issues of bias, unfairness, safety, lack of transparency14 and poor accountability. Moreover, automated weapons systems may not be predictable.15 With millions of lines of code in each application, it is difficult to know what values are embedded in software and how algorithms actually reach decisions.16 The very same algorithm can serve a variety of purposes, which makes the ethics of decision-making very difficult. In an autonomous weapon system, for example, automation algorithms can be found in the geo-location and driving systems, in the control of sensors, actuators and weapons, in health management, and also in targeting and attack decisions.

It is one thing to support general goals, such as fairness and accountability, but another to apply those concepts in particular domains and under specific political conditions. One cannot isolate ethics discussions from the broader political climate in which technology is being deployed.

In the foreseeable future, a growing number of combat operations are expected to be carried out by AWS. This raises questions in terms of compatibility with international humanitarian law (IHL), such as the principles of distinction, proportionality and responsibility. For instance, AWS will find it very hard to determine who is a civilian and who is a combatant.17

Current versions of unmanned systems operate with direct human input and human operators make the majority of tactical decisions. However, future systems will not only follow pre-determined routes or hit a pre-programmed target, but will also operate in a manner that allows the systems to select and acquire a target, choose a route to reach the target area, decide whether to deploy weapons and, if so, decide which weapon system to deploy.18

Although new technologies of warfare are not specifically regulated by IHL treaties, their development and employment in armed conflict do not occur in a legal vacuum.19 These challenges raise the question of whether existing international rules are sufficiently clear, or whether there is a need to clarify them or develop new ones.

3. AWS and international law

AWS are not specifically regulated by international law. However, it is undisputed that international customary principles and rules apply to their use.

3.1 Jus ad bellum and the right to self-defence

First, it is important to recall jus ad bellum, which governs the use of force between states. Any resort to armed force on the territory of a foreign state, without its express consent, violates Article 2(4) of the UN Charter (Charter).20

This prohibition is counterbalanced by the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, a right recognized under Article 51 of the Charter.21

Moreover, under customary international law, any resort to force in self-defence must comply with the conditions of necessity and proportionality. Necessity requires that force should only be used when other non-forcible measures are not effective, feasible or have been exhausted. The principle of proportionality requires that the state using force should respond in a manner that is proportional to the need to repel the threat.22

3.2 International humanitarian law

Furthermore, if AWS are deployed in armed conflicts, they would need to be able to evaluate and make judgements that comply with IHL.23

One of the fundamental rules of IHL requires parties to an armed conflict to direct their operations only against combatants and military objectives.24

Additionally, IHL prohibits attacks on persons who are hors de combat and therefore vulnerable.25 Under customary international law, a person hors de combat is:

  • anyone who is in the power of an adverse party;
  • anyone who is defenceless because of unconsciousness, shipwreck, wounds or sickness; or
  • anyone who clearly expresses an intention to surrender.26

3.3 International law of law enforcement

Finally, a number of criminal justice instruments may also apply to the use of AWS in law enforcement,27 including the 1979 Code of Conduct for Law Enforcement Officials,28 and the 1990 Basic Principles on the Use of Force and Firearms by Law Enforcement Officials.29

The international law of law enforcement defines when the use of force by a state’s law enforcement officials is lawful. Force may be used only:

  • in self-defence;
  • to prevent crime;
  • to effect or assist in the lawful arrest of offenders or suspected offenders;
  • to prevent the escape of offenders or suspected offenders; and
  • to maintain public order and security.30

3.4 Convention on Conventional Weapons

The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, commonly referred to as the Convention on Conventional Weapons (CCW),31 prohibits the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately. While the CCW does not expressly mention AI and automated systems, its general principles still apply to AWS.

The CCW was adopted in 1980, and Canada was among the first group of countries to sign it in 1981. The CCW is important not only because it may apply to AWS, but also because it provides technical, legal and political experts with a forum to exchange information, agree on definitions and maintain a dialogue on new issues of concern in conventional weapons use and development, including AWS.32

3.5 Group of Governmental Experts and Canada’s perspective

In 2016, on the occasion of the Fifth Review Conference of the CCW, Canada and other High Contracting Parties established an open-ended Group of Governmental Experts (GGE) on AWS.33

In March 2016, 14 countries called for a ban on the development of AWS; Canada was not among them.34 In its opening statement, Canada reiterated that while it continues to believe that IHL is sufficiently robust to regulate emerging technologies, it also recognizes that AWS may raise unique challenges with regard to the weapons review process, such as those related to testing and evaluation and, more generally, to ensuring the lawful use of AWS.35

In November 2017, over 200 leaders in AI from across Canada signed an open letter to Prime Minister Justin Trudeau urging the government to “take a strong and leading position against AWS on the international stage.”36 Subsequently, at the 2017 GGE meeting, Canada stated that it is “committed to maintaining appropriate human involvement in use of military capabilities that can exert lethal force.”37 In March 2019, the GGE met to discuss developments and strategies concerning AWS. Canada did not issue any further statements regarding AWS.

Canada’s current position is that work to restrict or ban weapon systems prone to indiscriminate effects, or which are excessively injurious, is critical.38 However, the Canadian government does not share the belief that AWS should be banned outright and is not interested in banning AWS technology. In its recent defence policy document, Strong, Secure, Engaged, the Government of Canada committed to “maintain appropriate human involvement in the use of military capabilities that can exert lethal force”,39 while pointing out that the future of defence is “expected to be vastly different than today, with a greater emphasis on information technologies, data analytics, deep learning, autonomous systems”.40 Alarmingly, nowhere in the document is “appropriate human involvement” defined, nor does the document openly support the idea that meaningful human control is critical for AWS.

The second part of the 2019 GGE session will take place on 20 and 21 August 2019. It remains to be seen whether Canada will take any formal position regarding AWS. As the future of AWS is still being decided, it is incumbent on all of us to ensure that the choices made and the actions taken are the most morally acceptable ones possible.

4. Pathways to possible solutions

While the military use of AI seems unavoidable and a complete ban on the use of AI does not seem to be the answer, the rapid advancement of autonomous technologies and, more importantly, the positions held by a number of states mean that clear strategies for regulating AWS in the context of armed conflicts need to be developed, including strategies for a complete ban on AWS.

4.1 State action for a ban

The history of the negotiation and ratification of the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, known informally as the Ottawa Treaty,41 could serve as an example of how an initiative to ban a weapon, despite little initial international support, may be fast-tracked into a widely recognized international treaty.

The First Review Conference of the 1980 Convention on Conventional Weapons closed in Geneva on 3 May 1996 and introduced a number of changes that were widely welcomed, but that fell far short of a total prohibition of anti-personnel mines. Canada’s delegation announced that Canada would host a meeting of pro-ban states to develop a strategy to move the international community towards a global ban. The conference, held in October 1996 in Ottawa, set the scene for what would become known as the “Ottawa process”: a fast-track negotiation of a convention banning anti-personnel mines. Only 14 months later, representatives of 121 governments signed the Ottawa Treaty. By the end of April 1998, there were a total of 124 signatories, and 11 states had already ratified the Convention, which entered into force six months after the 40th state formally adhered to it.42 Today, there are 133 signatories and 164 Parties to the Ottawa Treaty.

Similarly, the process towards a complete ban of AWS could start with legal reviews at the national level, move to an international declaration introducing a non-binding code of best practices, and then prepare the ground for the fast negotiation and conclusion of a convention banning AWS.

4.1.1 National code of best practices: AWS legal review

The first step for states would therefore be to develop sets of guiding principles for the safe, ethical and responsible use of AWS at the national level.

There are several proposals on how this step could be approached. Probably the most comprehensive is the one discussed by Michael W. Meier, in the form of five questions that could help states considering AWS development ensure compliance with the principles and rules of IHL:

  • Whether there is a specific rule, such as a treaty obligation or a customary rule, prohibiting or restricting the use of the weapon;
  • Whether, in its normal or intended circumstances of use, the weapon is of a nature to cause superfluous injury or unnecessary suffering;
  • Whether the weapon is inherently indiscriminate, i.e., whether the weapon is capable of being used in compliance with the rule of discrimination (or distinction);
  • Whether the weapon is intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment; and
  • Whether there are any likely future developments in the law of armed conflict that may be expected to affect the weapon subject to review.43

Based on these five questions, policy makers would need to coordinate with their respective governments to ensure that any such systems are consistent with national policies and to consider developments at the CCW.44

This solution would have a considerable advantage: it would be based on existing principles of international law and IHL, and would leave states the flexibility of defining best practices, thereby allowing some states to move ahead of those harbouring doubts about regulating AWS and to create a norm that is eventually adopted by the majority of states.

4.1.2 Political declaration

The next step would be the adoption of a political declaration, such as a UN resolution, affirming the principle of human control over weapons of war, accompanied by a non-binding code of conduct. Such a measure would require engagement and restraint of all declaring parties at all times to ensure compliance with international law.45

4.1.3 International ban of AWS

The final step would be the adoption of a legally binding international ban on the development, deployment and use of AWS. Such a ban could come in the form of a new CCW protocol, a tool used to address weapon types not envisioned in the original treaty,46 or in the form of a separate convention.

4.2 Corporate responsibility and code of ethics

While it is true that corporate social responsibility is no substitute for government regulation, it can be an important element in a larger regulatory system.47 Companies can make sure their products are sold by licensed and reputable dealers that conduct background checks, or they could commit to selling military and law enforcement equipment only to government clients.48 Otherwise, should they be held responsible for the foreseeable consequences that flow from the use of their products? This responsibility could include civil liability and, in cases involving war crimes and violations of human rights, responsibility under international human rights standards.

Other initiatives, primarily driven by employee protests at tech companies such as Amazon49 and Google, might contribute in this direction. For example, in 2018, over 3,000 Google employees signed an open letter50 protesting the company’s involvement in Project Maven, a U.S. Department of Defense AI project that studies imagery and could eventually be used to improve drone strikes on the battlefield. Google employees expressed concern that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. Google’s reaction was to end its involvement in the project beyond its original contract. In addition, in its code of ethics, Google wrote that it will not design or deploy AI in weapons or other technologies designed to cause or directly facilitate injury to people; in technologies that gather or use information for surveillance in violation of internationally accepted norms; or in technologies whose purpose contravenes widely accepted principles of international law and human rights.51

5. Conclusion

Ethical codes and guidelines are rarely backed by enforcement, oversight, or consequences for deviation. Ethical codes can only help close the AI accountability gap if they are truly built into the processes of AI development and are backed by enforceable mechanisms of responsibility that are accountable to the public interest.52 Thus, should embedding existing and future ethical and legal frameworks into AWS be part of the solution? It remains to be seen whether the IHL principles of distinction and proportionality can be encoded into digital formats.53

Some advocates of a ban on autonomous weapons systems seek to ban not merely the production and deployment of these machines but also their research, development and testing. As history shows, a ban that sweeping is simply impossible. Therefore, states must develop an understanding of, and rules on, which uses of autonomy are appropriate and which go too far and surrender human judgment during times of conflict.

These rules must preserve what we value about human decision-making, while attempting to improve on the many human failings in war.54 The term autonomy in the context of AWS should be understood and used in the restricted sense of the delegation of decision-making capabilities to a machine. Different functions within AWS may be delegated to varying extents, and the consequences of such delegation depend on the ability of the humans controlling them. A human should still be responsible for programming the behaviour of the autonomous system, and its actions must be consistent with the laws and strategies provided by humans.

It remains to be seen whether the second part of the 2019 GGE session will become an occasion for Canada and like-minded countries, accompanied by corporate actors, to engage in a process similar to the one that led to the Ottawa Treaty, resulting in an international ban on the use of AWS in military conflicts.

Christiane Saad is a lawyer and the Managing Director of the Programme de pratique du droit at the University of Ottawa. Ewa Gosal is a project lawyer at the Association des juristes d'expression française de l'Ontario.

 

1 United Nations News, “Autonomous weapons that kill must be banned, insists UN chief” (25 March 2019), online: United Nations <https://news.un.org/en/story/2019/03/1035381>.

2 Neil Davison, “A legal perspective: Autonomous weapon systems under international humanitarian law” (2017), United Nations Office for Disarmament Affairs Occasional Papers No 30, p 5.

3 See generally Bernard Marr, Artificial Intelligence in Practice: How 50 Successful Companies Used AI and Machine Learning to Solve Problems, (Cornwall: John Wiley and Sons, 2019).

4 See generally Pei Wang, “Artificial General Intelligence — A gentle introduction”, online: <https://cis.temple.edu/~pwang/AGI-Intro.html>.

5 Jayshree Pandya, “The Weaponization Of Artificial Intelligence”, Forbes (14 January 2019), online: <https://www.forbes.com/sites/cognitiveworld/2019/01/14/the-weaponization-of-artificial-intelligence/#52783e036867>.

6 The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, “Reframing Autonomous Weapons Systems”, at 115 online: IEEE <https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_reframing_autonomous_weapons_v2.pdf> [IEEE].

7 Vincent Boulanin and Maaike Verbruggen, “Mapping the development of autonomy in weapon systems” (November 2017) at 8, online: Stockholm International Peace Research Institute <https://www.sipri.org/publications/2017/other-publications/mapping-development-autonomy-weapon-systems>.

8 Bonnie Docherty, “Losing Humanity: The Case against Killer Robots” (19 November 2012) at 2, online: Human Rights Watch <https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots>.

9 Ibid.

10 Ibid.

11 International Committee of the Red Cross, “Expert Meeting 26–28 March 2014 report, Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects” (November 2014) at 5, online (pdf): International Committee of the Red Cross <https://www.icrc.org/en/document/report-icrc-meeting-autonomous-weapon-systems-26-28-march-2014>.

12 Ruth A David & Paul Nielsen, “Defense Science Board Summer Study on Autonomy”, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Washington, DC (June 2016) at 4, online: <https://apps.dtic.mil/dtic/tr/fulltext/u2/1017790.pdf>.

13 Kelsey Piper, “Death by algorithm: the age of killer robots is closer than you think”, Vox (21 June 2019), online: <https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war>.

14 International Committee of the Red Cross, “International Humanitarian Law and the Challenges of contemporary armed conflicts: Report” (October 2015), 32IC/15/11, at 13, online:  <https://www.icrc.org/en/download/file/15061/32ic-report-on-ihl-and-challenges-of-armed-conflicts.pdf> [ICRC].

15 IEEE, supra note 6.

16 See generally Dustin A Lewis, Gabriella Blum & Naz K Modirzadeh, “War-Algorithm Accountability”, Harvard Law School Program on International Law and Armed Conflict (HLS PILAC) (31 August 2016), online: <http://dx.doi.org/10.2139/ssrn.2832734>.

17 Noel Sharkey, “Saying ‘No!’ to Lethal Autonomous Targeting,” (2010) 9:4 Journal of Military Ethics 378.

18 Markus Wagner, “The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems” (2014) 47 Vand J Transnat’l L 1371.

19 ICRC, supra note 14 at 38.

20 Charter of the United Nations, 26 June 1945, Can TS 1945 No 7, art 2(4).

21 Ibid, art 51.

22 Milena Costas Trascasas & Nathalie Weizmann, “Autonomous Weapon Systems under International Law” (November 2014), Geneva Academy of International Humanitarian Law and Human Rights, Academy Briefing No 8, at 10, online: <https://www.geneva-academy.ch/joomlatools-files/docman-files/Publications/Academy%20Briefings/Autonomous%20Weapon%20Systems%20under%20International%20Law_Academy%20Briefing%20No%208.pdf> [Geneva Academy].

23 Geneva Academy, supra note 22 at 13.

24 Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, 1125 UNTS 3, art 48.

25 Ibid, art 41.

26 International Committee of the Red Cross, “Rule 47”, IHL Database, Customary IHL, online: <https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule47>.

27 Geneva Academy, supra note 22 at 11.

28 United Nations General Assembly, Code of Conduct for Law Enforcement Officials, Resolution 34/169, adopted 17 December 1979.

29 United Nations Congress on the Prevention of Crime and the Treatment of Offenders, Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, adopted 7 September 1990.

30 Ibid.

31 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, 10 October 1980, 1342 UNTS 137 (entered into force 2 December 1983).

32 Convention on Certain Conventional Weapons, GAC, 18 September 1997 (entered into force 2 December 1983), online: <https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/peace_security-paix_securite/conventional_weapons-convention_armes.aspx?lang=eng>.

33 2019 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), UNOG, online:  <https://www.unog.ch/__80256ee600585943.nsf/(httpPages)/5535b644c2ae8f28c1258433002bbf14?OpenDocument&ExpandSection=7%2C3#_Section7>.

34 Kathleen Harris, “Should Canada join the call for a ban on 'killer' robots?”, CBC (16 April 2016), online: <https://www.cbc.ca/news/politics/canada-killer-robots-lethal-autonomous-weapons-un-1.3538411>.

35 National Statement of Canada, Meetings of Experts on Lethal Autonomous Weapons Systems, Convention on Certain Conventional Weapons (CCW), Geneva, 11–15 April 2016, online: <https://www.unog.ch/80256EDD006B8954/(httpAssets)/3B4959531DA33F78C1257F920057C4A5/$file/2016_LAWS+MX_GeneralExchange_Statements_Canada.pdf> [translated by author].

36 Open Letter to the Prime Minister of Canada (2 November 2017), Ian Kerr & al, Canada Research Chair in Ethics, Law and Technology, University of Ottawa, “Call for an International Ban on the Weaponization of Artificial Intelligence”, online: <https://techlaw.uottawa.ca/bankillerai>.

37 Meeting of Experts on LAWS, “Canadian Food For Thought Paper: Context, Complexity and LAWS” (2016), United Nations Office at Geneva, Canada Working Paper, online: <https://www.unog.ch/80256EDD006B8954/(httpAssets)/C6F73401FA55F58FC1257F850043AB3A/$file/2016_LAWS+MX_CountryPaper+Canada+FFTP2.pdf>.

38 Canada - Thematic Debate Statement on Conventional Weapons - First Committee of the 72nd Session of the UN General Assembly, GAC (18 February 2018), online: <https://www.international.gc.ca/world-monde/international_relations-relations_internationales/un-onu/statements-declarations/2017-10-22-weapons-armes.aspx?lang=eng>.

39 Canada, National Defence, Strong, Secure, Engaged, Canada’s Defence Policy, 2017, at 73, online: <http://dgpaapp.forces.gc.ca/en/canada-defence-policy/docs/canada-defence-policy-report.pdf>.

40 Ibid at 55.

41 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, 18 September 1997, 2056 UNTS, (entered into force 1 March 1999).

42 Stuart Maslen & Peter Herby, “An international ban on anti-personnel mines: History and negotiation of the Ottawa treaty” (1998) 325 Intl Rev Red Cross, online: <https://www.icrc.org/en/doc/resources/documents/article/other/57jpjn.htm>.

43 Michael W Meier, “Lethal Autonomous Weapons Systems (LAWS): Conducting a Comprehensive Weapons Review” (2016) 30:1 Temp Intl & Comp LJ 127 [Meier].

44 Ibid, at 132.

45 Meier, supra note 43 at 132.

46 Michael T Klare, “Autonomous Weapons Systems and the Laws of War”, Arms Control Association (March 2019), online: <https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war>.

47 Deborah Avant, “Where are the socially responsible companies in the arms industry?” (15 February 2013), online: PRI GlobalPost <https://www.pri.org/stories/2013-02-15/where-are-socially-responsible-companies-arms-industry>.

48 Ibid.

49 An Amazon Employee, “I’m an Amazon Employee. My Company Shouldn’t Sell Facial Recognition Tech to Police” (16 October 2018), online: Medium <https://medium.com/s/powertrip/im-an-amazon-employee-my-company-shouldn-t-sell-facial-recognition-tech-to-police-36b5fde934ac>.

51 Sundar Pichai, “AI at Google: our principles” (7 June 2018) online (blog):  The Keyword <https://www.blog.google/technology/ai/ai-principles/>. See also Google, “Responsibilities on AI Practices”, online: <https://ai.google/responsibilities/responsible-ai-practices/>.

52 AI Now Institute, “AI Now Report 2018”, (December 2018), at 9, online (pdf): <https://ainowinstitute.org/AI_Now_2018_Report.pdf>.

53 Supra note 15 at 18.

54 Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: WW Norton and Company, 2018) at 362.