New book anticipates a world of military robots—and the need to regulate them

In our digitally mediated world, the atrocities of war are hard to ignore. Conflicts in Europe (Ukraine–Russia), the Middle East (Israel–Hamas) and elsewhere deliver images of death and destruction as fast as our feeds can load them.

As AI advances, the weapons of war grow ever more capable of killing people without meaningful human oversight, raising troubling questions about how today’s and tomorrow’s wars will be fought, and about how autonomous weapons systems could weaken accountability for the violations of international law that may attend their deployment.

Denise Garcia, professor of political science and international affairs at Northeastern University, condenses these grim realities into a new book on the subject titled “The AI Military Race: Common Good Governance in the Age of Artificial Intelligence.” The book explores the challenges of “creating a global governance framework” that anticipates a world of rampant AI weapons systems against the backdrop of deteriorating international law and norms, a world increasingly resembling the one in which we now live.

Speaking to Northeastern Global News, Garcia, who sat on the International Panel for the Regulation of Autonomous Weapons from 2017 to 2022, noted that AI military applications have already been deployed in the ongoing conflicts in Europe and the Middle East—one of the most famous examples being Israel’s Iron Dome.

Indeed, the possibility that lethal autonomous weapons systems may soon be deployed on the battlefield presents an urgent need to take collective action in the form of policies, treaties and specific technology bans, she says.

“The world must come together and create new global public goods, which I would argue needs to include a framework to govern AI, but also commonly agreed rules on the use of AI in the military,” Garcia says.

Garcia says the acceleration of AI technology has implications beyond the battlefield, spilling over into national security. In 2021, the U.S. National Security Commission on Artificial Intelligence urged the U.S. to continue rapid development of AI to safeguard national security and remain competitive with Russia and China.

But Garcia has argued that accelerating militarized AI as such is not the right approach, and risks adding more volatility to an already highly unstable international system. She argues that the U.S. commission’s report “resurrected” the type of Cold War-era thinking and strategy that led to the accumulation of more than 70,000 nuclear weapons during that period.

Instead, she says the U.S. should continue pushing for a decrease in nuclear arsenals, while developing standards that keep human beings firmly in control of military and battlefield decisions—a case she lays out in meticulous detail in the book.

“Simply put, AI should not be trusted to make decisions about warfare,” Garcia says.

Many academics agree. Some 4,500 AI and robotics researchers have collectively said that AI should not make decisions about killing human beings, a position that, Garcia notes, aligns with European Parliament guidelines and European Union regulations. U.S. officials, however, have pushed for a regulatory paradigm of rigorous testing and design such that human beings can use AI technology “to make the decision to kill.”

“This looks good on paper but is very hard to achieve in reality, as algorithms are unlikely to be able to assimilate the vast complexity of what happens in war,” Garcia says.

Not only do AI weapons systems threaten to upend norms of accountability under international law, but they also make prosecuting war crimes that much harder because of problems associated with attributing “combatant status” to military AI technology, Garcia says.

“International law—and laws in general—have evolved to be human-centered,” she says. “When you insert a robot or software into the equation, who will be held responsible?”

She continues, “The difficulties of attribution of responsibility will accelerate the dehumanization of warfare. When humans are reduced to data, then human dignity will dissipate.”

Existing AI and quasi-AI applications have already made waves in defense circles. One such application, according to one source, lets a single person control multiple unmanned systems, such as a swarm of drones capable of attacking by air or beneath the sea. In the war in Ukraine, loitering munitions (uncrewed aircraft that use sensors to identify targets, sometimes called “killer drones”) have generated debate over precisely how much control human operators retain over targeting decisions.

This story is republished courtesy of Northeastern Global News, news.northeastern.edu.

Citation:
New book anticipates a world of military robots, and the need to regulate them (2023, December 4)
retrieved 4 December 2023
from https://phys.org/news/2023-12-world-military-robots.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




