
This post has been authored by Amogh Pareek, a third-year student at National Law University, Jodhpur, pursuing the B.B.A., LL.B. (Hons.) course. This article was also published on the Oxford Human Rights Hub.

Robotic Roguery: Analysing the Legality of Autonomous Weapons (Killer Robots) vis-à-vis Principles of International Law


The outrage against autonomous weapon systems (“Killer Robots”) is at an all-time high. Recently, UN Secretary-General António Guterres called for an international ban [1] on Killer Robots, describing their use as “morally despicable”. Mary Wareham of Human Rights Watch advocated a similar view and described [2] Killer Robots as “one of the most pressing threats to humanity”. Globally, over 22 nations, 116 AI and robotics companies [3], and over 3000 robotics experts and scientists [4], including the likes of Elon Musk and Stephen Hawking, have urged the UN to ban Killer Robots.

Amidst such uproar, the question of the legality of autonomous weapons gains prime importance.


The International Committee of the Red Cross (“ICRC”) defines [5] autonomous weapon systems (also known as lethal autonomous weapons or “killer robots”) as weapons that “independently search for, identify and attack targets without human intervention.” Given their highly autonomous nature, such weapons potentially violate a host of international law principles.

First, Killer Robots violate the Martens Clause [6], which prescribes that in cases not covered by treaties or customary international law (such as that of Killer Robots, for which no governing law or treaty exists [7]), the principles of humanity and the dictates of public conscience apply. The International Court of Justice (“ICJ”) endorsed the Martens Clause in the Nuclear Weapons case [8] as “an effective means of addressing the rapid evolution of military technology”. While the ‘principles of humanity’ [9] refer to the principles of distinction and proportionality (discussed below), the ‘dictates of public conscience’ [10] can be determined by looking at the opinions of the public and of experts. As stated above, the use of Killer Robots is opposed not only by the general public, including 20 Nobel Peace Laureates [11], over 60 NGOs [12] across the globe, and over 160 religious leaders [13], but also by more than 3000 robotics experts and scientists [14] – clearly establishing that the use of Killer Robots violates the Martens Clause.

Second, the use of Killer Robots violates the principle of distinction [15], which requires weapons to be capable of distinguishing between combatants and non-combatants, including those hors de combat (personnel unable to fight due to injury or other causes). The principle is enshrined in Articles 48, 51, 52, 53, 54, and 57 of Additional Protocol I to the Geneva Conventions [16] – all of which reinforce that parties to a conflict must at all times distinguish between civilians and combatants, and direct their attacks only against military objectives. Given their highly autonomous nature, Killer Robots cannot make such a fine distinction [17] between military personnel and civilians. The problem is compounded because they are pre-programmed and human intervention in decision-making is absent [18]. Thus, Killer Robots violate the principle of distinction.

Third, the use of Killer Robots falls foul of the principle of proportionality [19], which prohibits an attack against a legitimate military target if the collateral civilian harm is excessive. Further, as evinced by Articles 51(5)(b) [20] and 57(2) [21], the proportionality principle requires a subjective assessment of the battlefield to ensure that harm to civilians is minimized. Killer Robots, however, cannot make such a subjective assessment and may end up causing great collateral civilian damage. This makes their use likely to result in a much greater human cost [22]: while they may correctly identify an enemy target, they lack the capability to take into account the plethora of variable factors [23] that must be considered before an attack – such as the number of civilians in the area and the effect an attack on the target would have on them.

Fourth, the use of Killer Robots may utterly disregard the principle of military necessity [24], which states that only that degree of force may be used which is required for the legitimate purpose of the conflict – i.e., the complete or partial submission of the enemy – with the minimum expenditure of life and resources. It prohibits the infliction of destruction or injury that is unnecessary for the reasonable purposes of the conflict. Killer Robots, however, are incapable of making such judgments. For instance, an autonomous robot cannot determine whether an enemy it has shot has merely been knocked to the ground, is feigning injury, or is so gravely wounded and incapacitated as to no longer pose a threat. Owing to this inability, the robot may unnecessarily shoot the person a second time, thereby disregarding the principle of necessity.

Fifth, the use of Killer Robots makes the attribution of criminal responsibility a huge conundrum, for several reasons. Firstly, Killer Robots lack a necessary constituent of criminal responsibility – mens rea, or the mental element – as a machine cannot be said to possess the ‘necessary intent’ to commit a crime. Secondly, such robots fall outside the jurisdiction of most courts, which are empowered to try only “natural persons”. Thirdly, any judgment determining the guilt of a robot would be largely ineffective, as no punishment can deter other robots from committing the same crime, given their inanimate nature. Finally, the attribution of liability itself is indeterminate – no legislation or framework exists to fix liability on an individual, whether the robot’s manufacturer, its programmer, or the commander responsible for its deployment.


Taken together, the foregoing assertions make it abundantly clear that the current state of autonomous weapon technology is plagued by myriad issues – moral, ethical, and legal in nature. Moreover, the existing legal framework suffers from great infirmities in providing a comprehensive regime governing the use of autonomous weapons. For these very reasons, until the gap between the state of autonomous weapon technology and the laws governing it is bridged and a competent legal framework is devised, States and international organizations should resolve to place a pre-emptive ban on the production and use of Killer Robots.
