Anthropic wanted to use cloud AI to coordinate US drone swarms in Pentagon challenge

Anthropic submitted a proposal for a $100 million Pentagon challenge aimed at developing voice-controlled drone swarm technology, even as it negotiated limits on the military use of its AI systems. The company was not selected, amid a broader debate over human control of autonomous weapons.

Anthropic proposed voice-controlled drone swarm technology for US Pentagon award challenge (Photo: Reuters)

Anthropic, one of the world’s most-watched AI companies, submitted a proposal to a US Pentagon competition earlier this year that aimed to build voice-controlled drone swarm technology. The submission was part of a $100 million prize challenge designed to accelerate research into autonomous military systems that can operate in the air and sea, according to people with knowledge of the development.

The move is notable as it comes at a time when Anthropic is in delicate discussions with the US Department of Defense over how its AI models could be deployed in military settings. The company has consistently said that while it supports legitimate military applications of AI, it does not support the development of fully autonomous weapons that can independently select and attack targets, nor does it support large-scale domestic surveillance.

People familiar with the proposal told Bloomberg that Anthropic did not view entering the competition as overstepping its internal boundaries. The submission reportedly focused on using the company's cloud AI models to interpret a commander's spoken intent and convert it into structured digital commands. Those instructions would then be used to coordinate a fleet of drones working together as a swarm.

Importantly, the AI system was not meant to make decisions about targeting or weapon release. Instead, human operators would retain oversight of the system, with the ability to intervene or halt operations if necessary. In other words, the company's role would have focused on enabling coordination and communication rather than delegating lethal decision-making to software.

The Pentagon’s competition is structured as a phased program. It starts with software development before moving on to real-world testing on a live platform. The initiative is for offensive capabilities, with the human-machine interface expected to impact “the lethality and effectiveness of these systems,” according to a January announcement by defense officials. The initial phase focuses on building software capable of synchronizing drone activities across multiple domains, including air and maritime environments. Later stages envision features such as target awareness, information sharing, and eventually systems that can operate “from launch to termination”.

Human oversight at center of AI defense debate in US

Despite its participation, Anthropic was not selected among the initial round of winners. The reasons have not been made public.

The context became more complicated on Friday when US Defense Secretary Pete Hegseth directed the Pentagon to prevent contractors and their partners from engaging in commercial transactions with Anthropic. The order came after tensions over how AI companies should be involved in defense projects involving autonomous capabilities.

Anthropic has previously argued that AI systems are not yet reliable enough to independently operate autonomous weapons platforms. Additionally, company officials have said they are prepared to support military use cases that comply with international law and preserve meaningful human control.

The drone swarm challenge was jointly launched by the US Special Operations Command, which oversees the Defense Autonomous Warfare Group, and the Defense Innovation Unit. The program is expected to advance in phases based on the progress and continued interest of the participants.

Other technology players fared better in the selection process. A proposal involving SpaceX and xAI was among those selected, as were two defense technology firms that listed OpenAI as an AI partner. One of those bids was led by Applied Intuition, known for its work on autonomous vehicle systems.

In the case of OpenAI, its tools would reportedly assist with mission control tasks by converting voice input into digital operational commands. A company spokesperson said any involvement would remain within its established usage policies.

Shortly after the Pentagon’s move regarding Anthropic, OpenAI Chief Executive Sam Altman announced a new agreement with the Department of Defense to deploy OpenAI’s systems on classified cloud infrastructure. According to Altman, the arrangement would require “human accountability for the use of force, including autonomous weapons systems.”

While armed forces around the world already deploy multiple drones simultaneously, organizing large numbers of them into an intelligent, coordinated swarm remains technically demanding. Such systems must manage communications, navigation, and adaptability in dynamic combat environments. The Pentagon’s challenge underscores how quickly defense agencies are moving to use AI for these complex tasks, even as companies like Anthropic attempt to draw ethical lines around their involvement.
