
(AI)mageddon: Who is Liable When Autonomous Weapons Attack?

The increasing use of autonomous weapons systems (AWS) in military applications raises concerns about accountability for potential violations of international humanitarian law. Despite growing concerns and mounting calls for international regulation, the US continues to prioritize developing its military AI capabilities. Clear accountability mechanisms are urgently needed to prevent potentially devastating consequences.

Military combat drone UAV launching missiles, 3D render. © PHOTOCREO Michal Bednarek / shutterstock.com

August 02, 2024 06:53 EDT

The time is upon us: the integration of artificial intelligence (AI) into military applications, most visibly in the form of autonomous weapons systems (AWS), has surged dramatically over the past few decades. Sci-fi classics like The Terminator may be closer to reality than we thought. A stark statistic illustrates this trajectory: from 2000 to 2010, the number of US-owned unmanned aerial vehicles skyrocketed from fewer than 50 to over 7,000. These advancements raise grave concerns about accountability for AWS violations of international humanitarian law (IHL).

US slow to ban or regulate lethal AWS

Current US policy frameworks offer only piecemeal solutions to AWS accountability. In November 2023, the US Department of State released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” The declaration stated its aim to “build international consensus on responsible behavior and guide states’ development, deployment, and use of military AI.” Over 50 countries endorsed the declaration.

To date, approximately 30 countries and 165 nongovernmental organizations have called for a preemptive ban on lethal AWS. The US, however, is not among them.

Furthermore, despite the US Department of Defense (DOD) issuing several directives aimed at regulating the use of AWS, these measures are fragmented and lack the teeth to ensure comprehensive accountability.

The US has also resisted international bans and stringent regulations on military AI applications, focusing instead on funding venture capital projects that enhance AI capabilities in the defense sector.

The DOD continues to dominate federal AI contracting, with the number of contracts growing from 254 to 657 in the last year alone. By potential value, defense AI contracts surged from $269 million, comprising 76% of all federal AI contract funding, to $4.323 billion, a staggering 95%. This massive investment underscores the US commitment to advancing military AI technology, but it also highlights the urgency of establishing robust legal and ethical frameworks to govern its use.

As technology evolves, international law must evolve too

Legal scholars have long raised pointed concerns about the accountability vacuum surrounding AWS-driven IHL violations. Professor Oren Gross of the University of Minnesota Law School emphasizes that delegating life-and-death decisions to machines without clear accountability mechanisms poses significant legal and ethical challenges. He asks, “If [lethal autonomous robots] (LAR) does, in fact, act in an independent, autonomous fashion, does this not disconnect the causal link between human agents and the crimes committed by the LAR?”

Furthermore, the novelty of this legal landscape exacerbates these concerns, leaving many questions unresolved. Israel is a particularly apt example of this ambiguity, given recent reports on the Israel Defense Forces’ use of AI targeting systems. The exceptional position Israel holds in US law, manifested in unique exemptions concerning funding and arms sales transparency, adds another layer of complexity to AWS accountability mechanisms.

Several case studies illustrate the dire consequences of unchecked AWS use in military operations. For instance, autonomous drones operating without adequate oversight have been implicated in erroneous strikes that resulted in civilian casualties. Legal and AI scholars have therefore urged policymakers to address these accountability gaps, emphasizing that without proper legal and ethical frameworks, the deployment of AI in warfare could lead to uncontrollable and devastating outcomes.

To address the pressing need for accountability in AWS, policymakers, legal experts and international organizations must work together to strengthen legal frameworks. This includes drafting and agreeing to clear regulations that delineate responsibility for AI-driven actions in warfare, ensuring that all stakeholders are held accountable for any violations. Implementing these measures will undoubtedly be challenging: resistance from powerful defense lobbies and the inherent difficulty of achieving international consensus are likely barriers.

International cooperation is crucial to bridge the legal gaps surrounding AI in warfare. Only through consensus-building efforts can global standards of transparency, accountability and oversight take hold. By learning from other AI-regulated industries, such as the automotive sector’s efforts to regulate autonomous vehicles, and adapting those lessons to a military context, the international community can better safeguard against the harms of AI technologies in warfare.

As AWS continue to proliferate in military operations, the importance of establishing clear accountability mechanisms cannot be overstated. The future of warfare may be increasingly autonomous, but it must also be governed by increasingly stringent legal and ethical principles to ensure global security and the sanctity of human rights.

[Ainesh Dey edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
