Artificial Intelligence, Martens Clause, and Israeli Military Ethics in Gaza

This article has been written by Shushrut Devadiga, a 5th-year law student at Kirit P. Mehta School of Law, Mumbai.

Introduction

Though many countries are racing to adopt Artificial Intelligence within their military strategies, Israel stands out for its extensive, open and enthusiastic adoption of AI within its military apparatus. Israel’s famous Iron Dome air defence system uses Artificial Intelligence to identify and intercept incoming missiles. Another system, known as “the Gospel”, was used during Israel’s conflict with Hamas in 2021, and an Israeli spokesperson claimed it could identify 100 targets (mainly alleged Hamas operatives) per day. However, a recent report by +972 Magazine has allegedly uncovered “Lavender”, an AI targeting system developed by Israel’s elite Unit 8200 and used to determine targets in its current campaign in Gaza. This has, arguably, provided one of the first proper insights into the real-world application of such systems. Considering the high level of casualties in the current war in Gaza compared to previous conflicts in the region, it also paints a bleak picture for the ethical use of AI in armed conflict. This article therefore aims to examine the legal ramifications of the Israeli military’s use of Artificial Intelligence under International Humanitarian Law (hereinafter “IHL”), especially through the lens of the Martens Clause. This clause, which forms part of the preamble of the 1899 Hague Convention, provides protection during armed conflict in situations where applicable rules of IHL are absent. To provide a comprehensive understanding, this article will first explore the AI systems used by the Israel Defence Forces in its war in Gaza and highlight their shortcomings. It will then analyse the ethical and legal implications under International Humanitarian Law, with an extensive discussion on the Martens Clause to assess its applicability to the current situation.

Understanding Israel’s use of Artificial Intelligence in Gaza

According to +972 Magazine’s reporting, Lavender has been used to identify suspected members of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking operatives, for elimination. The system was fed information collected through mass surveillance of the Gaza population and estimated the likelihood of a person’s affiliation with these organizations based on certain characteristics, known as “features”. These features were derived from data on known Hamas operatives and then applied to the rest of the population, with each individual ranked accordingly. Although lectures given by the commander of Unit 8200 suggested that the military would set thresholds on these ranked results in order to determine targets, the reality on the ground, as per +972 Magazine’s report, appears to be startlingly different: the constant demand for new targets has forced Lavender’s operators to lower the targeting threshold.
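The report does not disclose Lavender’s internal workings, but the described pattern of scoring individuals on surveillance-derived features, ranking them, and then lowering the cut-off under pressure for more targets can be conveyed with a minimal, purely hypothetical sketch. The names, scores and threshold values below are illustrative assumptions, not details from the report; the point is simply that lowering a classification threshold inflates the number of people flagged and, with it, the likelihood of misidentification.

```python
# Hypothetical illustration only: scoring, ranking and thresholding a population.
# None of the names or numbers reflect the actual Lavender system.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    score: float  # likelihood score produced by some classifier (0.0 to 1.0)

population = [
    Person("A", 0.96), Person("B", 0.81), Person("C", 0.62),
    Person("D", 0.41), Person("E", 0.17),
]

def flag_targets(people, threshold):
    """Return everyone whose score meets or exceeds the threshold, ranked by score."""
    ranked = sorted(people, key=lambda p: p.score, reverse=True)
    return [p for p in ranked if p.score >= threshold]

# A strict threshold flags only the highest-confidence matches...
print([p.name for p in flag_targets(population, 0.9)])   # ['A']

# ...while lowering it to satisfy a demand for "more targets" sweeps in
# progressively weaker matches, where misidentification is far more likely.
print([p.name for p in flag_targets(population, 0.4)])   # ['A', 'B', 'C', 'D']
```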

The report found that Israeli operators were not required to properly assess Lavender’s findings, as a measure to save time. It was also reported that the system incorrectly flagged individuals as Hamas operatives based on their communication patterns. In fact, one source mentioned that the only check conducted was to verify the sex of the target, as it was assumed that there were no female Hamas or PIJ operatives. It was further alleged that the thresholds mentioned above were lowered due to the Israeli military’s constant demand for “more targets”.

Another peculiar feature of Israel’s reliance on Lavender was the location of the strikes. +972 Magazine reported that, rather than attacking command centres, Israel has been targeting the identified Hamas/PIJ operatives in their private family homes, as the system found it much easier to locate such residences. This tactic, as admitted by Israeli military sources themselves to +972 Magazine, has led to higher civilian casualties, contrary to Israel’s public statements. Some of these targets were tracked through a program known as “Where’s Daddy?”, which was used to follow the alleged operatives and target them when they entered their households.

Applicability of Martens Clause

The Martens Clause forms an integral part of the law of armed conflict and first appeared in the preamble of the 1899 Hague Convention. It has also been incorporated into Additional Protocol I (AP I) to the Geneva Conventions, which establishes international legal standards for humanitarian treatment in conflicts. Article 1(2) of the Protocol states:

“In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience”

Considering the novel nature of Artificial Intelligence, there are no explicit provisions governing it within the Geneva Conventions, but the Martens Clause provides a crucial ethical and legal framework for regulating new technologies like AI. Since its inception, however, the clause’s application has been the subject of considerable scholarly debate, as highlighted by the International Court of Justice’s Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons (ICJ Advisory Opinion). The majority opinion did not delve much into the clause, merely observing that it has “proved to be an effective means of addressing the rapid evolution of military technology”. It also emphasised that the principles espoused by the clause are part of customary international law and jus cogens. Judge Shahabuddeen, in his dissenting opinion, went further by advocating a wide, dynamic and normative interpretation of the Martens Clause. He stated that:

“Martens Clause provided authority for treating the principles of humanity and the dictates of public conscience as principles of international law, leaving the precise content of the standard implied by these principles of international law to be ascertained in the light of changing conditions, inclusive of changes in the means and methods of warfare and the outlook and tolerance levels of the international community.”

The dissenting opinion was also critical of the narrow interpretation advanced in the arguments of some states party to the case. The United Kingdom argued that the clause was applicable only in the absence of specific treaty provisions covering the particular issue, stating in its written comments that “It is, however, axiomatic that, in the absence of a prohibitive rule opposable to a particular State, the conduct of the State in question must be permissible.” Judge Shahabuddeen instead argued that the word “remain” within the clause was indicative of its normative character. He further pointed out the paradox of the narrow interpretation, asking: “It is difficult to see what norm of State conduct it lays down if all it does is to remind States of norms of conduct which exist wholly dehors the Clause.” This normative character was also recognized by the European Court of Human Rights in Kononov v. Latvia.

The interpretation of the term “public conscience” is a difficult exercise, as it introduces a highly subjective element. Michael Salter states that the clause acted as “a translator of moral imperatives into concrete legal outcomes”, as was evident in the Nuremberg trials.

Judge Shahabuddeen’s dissent in the ICJ Advisory Opinion provides some relevant insights into the interpretation of the term. While agreeing with the ICJ that “the intrinsically humanitarian character of the legal principles” governs IHL and applies to all forms of weapons and warfare, he felt that the Court could have gone further and taken the view that “the public conscience could consider that no conceivable military advantage could justify the degree of suffering caused by a particular type of weapon.”

The “principles of humanity” are widely accepted under IHL, prohibiting combatants from inflicting suffering, destruction or injury in excess of what is required to achieve a legitimate military objective. They have, to an extent, been codified through the use of the term “humane treatment” in Geneva Conventions I and II, which seeks to protect the physical integrity, honour, mental state, etc. of protected persons.

The current Israeli use of Artificial Intelligence appears to be contrary to the principles espoused in the Martens Clause, including the principles of humanity, especially with regard to how targets are selected. It must be noted that Artificial Intelligence systems are inherently unpredictable “because they are constantly ‘learning’ and adapting to their surroundings”. Any error in the data fed into such software, or any systemic malfunction, can lead to civilian casualties and needs to be corrected through human oversight. However, +972 Magazine’s report highlights the potential absence of such oversight, owing to the lack of strong institutional checks and the constant demand to produce more targets. This is not surprising, considering the promise of Artificial Intelligence was to make one’s job easier, even if it involved difficult choices. As one of the sources in +972 Magazine’s report admitted, “The machine did it coldly. And that made it easier.” Given the large number of targets being generated by the system and the limited manpower available, it is next to impossible to oversee the system adequately.

Therefore, it can be argued that Israel’s use of Artificial Intelligence, and perhaps the very use of such systems in warfare, may be in contravention of the Martens Clause. It violates the principle of humanity under IHL by inflicting injury in excess of Israel’s military aims, as evidenced by the large death toll resulting from the current conflict. Furthermore, it is difficult to say that the current public conscience would consider any conceivable military advantage provided by the use of Artificial Intelligence sufficient to justify the degree of suffering caused in Gaza.

Conclusion

The Israeli military’s use of Artificial Intelligence in the Gaza conflict raises significant ethical and legal concerns. The alleged use of “Lavender” and other systems to identify and target individuals, often in residential areas, based on characteristics obtained from mass surveillance, may have led to numerous erroneous civilian casualties at rates that, though unjustifiable, have been considered acceptable by the Israeli military. This is in contravention of the principles enshrined in the Martens Clause. It must also be recognized, however, that this provision has never previously been utilized to curb the use of any weapon, as can be seen in the ICJ Advisory Opinion (even though the ICJ admitted that it can be a useful tool to regulate new developments under IHL). Since the technology is at a nascent stage, it has not yet been regulated under international law. Nevertheless, the current proceedings before the International Court of Justice and the International Criminal Court concerning Israel’s actions in Gaza provide an opportunity to establish legal precedents and frameworks to address the ethical and humanitarian challenges posed by AI in warfare, an opportunity both forums should not miss.

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views or positions of MNLUM Law Review Blog.)
