Prospects Have Dimmed for Autonomous Weapons Guardrails

Middle powers must advocate for responsible use even as military heavyweights lose interest.

April 30, 2026
Ghost Robotics Vision 60 takes part in an annual military drill by the Japanese Ground Self-Defense Force in Funabashi, Japan. (Kim Kyung-Hoon/REUTERS)

“We’re going to go on offense, not just on defense. Maximum lethality, not tepid legality. Violent effect, not politically correct.” That was US Secretary of Defense Pete Hegseth’s searing message last September when US President Donald Trump superficially rebranded the Department of Defense (DOD) as the “Department of War.” That so-called warrior ethos has since manifested far beyond America’s punishing campaign in Iran.

The same mindset is driving a surge in global US military adventurism, while also stoking a desire for unfettered access to new weapons technology powered by artificial intelligence (AI). But China is not far behind. In fact, in some areas of capability, it is well ahead. Both great powers are meanwhile abandoning initiatives meant to prevent the most dystopian outcomes of AI warfare, among them the dehumanization of war and the technology falling into the hands of extremist groups.

And then there’s Russia. The Kremlin has relentlessly violated international rules in its failed conquest of Ukraine, which has become a laboratory for robot combat. “We understand that for many delegations the priority is human control,” a Russian diplomat said during a May 2023 debate on AI arms regulations at the United Nations. “For the Russian Federation, the priorities are somewhat different.”

“The growing gap between international dialogue on military AI, which tends to emphasize risks and potential constraints on its use, and the accelerating efforts of militaries worldwide to integrate AI should be concerning to all nations,” wrote Michael C. Horowitz in February 2026 for the Council on Foreign Relations, where he is a senior fellow. His warning came after China, Israel, Russia, the United States and other regional military powers refused to sign a joint declaration at the third Responsible AI in the Military Domain (REAIM) summit, held in Spain.

“As UN efforts to create binding regulations on military AI intensify, particularly for autonomous weapon systems,” cautioned Horowitz, “multilateral negotiations run the risk of their efforts becoming increasingly disconnected from on-the-ground realities.”

As these systems proliferate, there is an enormous risk that military decision making will move beyond the limits of human cognition. Unless autonomous weapons use is anchored in shared guardrails of some kind, an escalatory spiral of dire accidents and miscalculations will likely soon follow.

America First — Safety Last?

More than 60 countries signed the Blueprint for Action at REAIM’s 2024 summit in Seoul, including the United States. China abstained — although it did endorse the joint declaration at REAIM’s first summit in The Hague in 2023. Yet only 35 of 85 participating countries signed REAIM’s Pathways to Action agreement in 2025.

The pact sought to operationalize concepts workshopped at the previous two gatherings. It also integrated recommendations from a stand-alone report published in September 2025 by a global commission of experts informing REAIM’s work. Military AI systems, the report suggested, must comply with international law and embed human responsibility across their entire life cycle — from design and testing to real-world oversight. Critically, this includes barring AI programs from ever being able to authorize nuclear strikes. A permanent, multilateral dialogue on military AI was recommended as well, alongside a network of specialists tasked with knowledge dissemination and capacity building.

Yet advocating for such ideal conditions increasingly resembles screaming into the void.

Trump’s 2025 National Security Strategy vows America will dominate autonomous weapons systems. Secretary Hegseth then issued a memo in January 2026 outlining the Pentagon’s new AI acceleration strategy. Its diktats include “aggressively identifying and eliminating bureaucratic barriers” to the US military’s use of AI tools. Agentic programs, the memo says, will be used for everything from campaign planning to “kill chain execution.”

This is already happening. The Wall Street Journal and Axios both reported that on February 27, just hours after the Pentagon designated Anthropic a supply-chain risk, banning it from doing business with the vast network of US military contractors and suppliers, American forces relied on Anthropic's Claude large language model (LLM) for their operations in Iran, given Claude's vital functions within Palantir's overarching Maven Smart System military decision-making platform. Tasks included intelligence assessments, planning military strikes and gaming out battlefield situations, the outlets said. Claude was also reportedly key to the overnight raid that seized Venezuela's leader, Nicolás Maduro, in early January.

However, the relationship between Anthropic and the Pentagon melted down after the company refused to license its AI models for mass domestic surveillance and fully autonomous lethal weapons, at least, Anthropic insisted, until its models could ensure killer robots would not misfire on civilians or friendly soldiers. Anthropic had long been the only AI company allowed within classified US defence networks. And while the Pentagon gave itself six months to strip Anthropic from its systems, on the same day the company was blacklisted, the US DOD swiftly struck a replacement deal with OpenAI. The executive overseeing OpenAI's robotics team quit in protest days later. Elon Musk's xAI is now providing its Grok model for US military use as well.

Legal experts insist the White House’s grievances with Anthropic will crumble in court. Still, the Pentagon has clearly moved on, creating an alarming precedent.

“If you’re pursuing immediate tactical advantages and signaling to these companies that you will provide very beneficial contracts to the first company that is willing to cross those ethical lines, it starts a race to the bottom,” Luke Barnes, a research scientist at New York University, told Foreign Policy as the rift deepened between Anthropic and the Pentagon. This will reverberate not only in the United States, Barnes predicted, but in rival countries as well. “That potentially creates a really dangerous global dynamic.”

Keeping Dialogue on Life Support

OpenAI’s national security chief posted on X that the company’s deal is sound because it limits models’ use to cloud-based applications, barring them from “edge” deployments on the battlefield. “Autonomous systems require inference at the edge,” she explained, referring to how LLMs process and respond to inputs. “By limiting our deployment to cloud API [application programming interface], we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”

The dark irony is that the Anthropic saga has boosted momentum for defence technology start-ups striving to do just that.

China, for its part, has long held a strategically ambiguous position on autonomous weapons. Beijing has even expressed on-and-off support for some type of international ban. But such rhetorical hedging can be read any number of ways. And it hasn’t stopped the People’s Liberation Army (PLA) from racing ahead with attempts to automate nearly every aspect of warfare.

Researchers from Georgetown University’s Center for Security and Emerging Technology recently outlined findings from their analysis of thousands of PLA procurement requests since 2022. They identified how the PLA is “prototyping AI capabilities that can pilot unmanned combat vehicles, detect and respond to cyberattacks, track seaborne vessels and identify and strike targets on land, at sea, and in space.”

Part of the motivation behind this, the researchers suggest, is to offset the growing distrust China’s political leaders have in their military brass. More than half of the PLA’s top officers have been purged since Chinese President Xi Jinping launched a corruption crackdown in 2022.

The propaganda value shouldn’t be underestimated either. A recent YouTube video posted by state broadcaster China Central Television (CCTV) featured a computer rendering of a conceptual autonomous orbital mothership designed to launch drones and missiles from space. Another CCTV video from July 2025 showcases armed robotic dogs navigating various environments in combat simulation drills.

Meanwhile, UN-organized talks on possible prohibitions and regulations may be nearing their twilight. In early March 2026, the UN Convention on Certain Conventional Weapons (CCW) launched fresh negotiations in Geneva involving 128 member states. Such dialogue has been ongoing for more than a decade — to no avail. This phase of meetings will last until September, when the CCW’s current three-year mandate to examine autonomous weapons ends. A review conference in November will then decide what to do with the CCW’s rolling draft text going forward.

With the international order violently in flux, it’s magical thinking to believe rogue hegemons will suddenly cede any technology that advances their self-interest. But that doesn’t mean the rest of the world should stand idle. Middle powers, such as those that signed REAIM’s Pathways to Action, must forge ahead on a code of conduct and accountability framework for autonomous weapons systems and preserve it for the future, when military powers come to recognize its value. That would still be progress.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Based in Montreal, Canada, Kyle Volpi Hiebert is a Digital Policy Hub visiting fellow and a researcher and independent political risk analyst focused on globalization, conflict and emerging technologies.