OpenAI reaches agreement with US War Department amid Trump-Anthropic fallout
OpenAI has signed an agreement with the War Department to deploy its AI models on the department’s classified networks, company CEO Sam Altman said. The agreement comes amid growing tension between the Pentagon and rival AI firm Anthropic over the military use of AI.

OpenAI has struck a deal with the War Department to deploy its artificial intelligence (AI) models on the department’s classified networks, company CEO Sam Altman announced on Friday (local time). The move comes at a time of growing confrontation between the Donald Trump administration and rival AI firm Anthropic over the military use of AI.
Explaining the outlines of the deal, Altman said OpenAI would establish strict technical and operational controls to ensure that its systems operate only within agreed limits.
There are two core principles at the heart of the agreement, he said: a ban on domestic mass surveillance and a clear requirement that humans remain accountable for decisions involving the use of force, including in the context of autonomous weapons.
According to Altman, the War Department accepted these principles and incorporated them into the contract, aligning them with existing laws and policies.
Altman said OpenAI would deploy specialized “forward-deployed engineers” to work closely with the department to help monitor model performance and security. All deployments will run exclusively on secure cloud infrastructure, he said.
OpenAI has also urged the War Department to offer equal terms to all AI vendors, arguing that such safeguards should be an industry-wide baseline rather than the exception.
“We have expressed our strong desire to see things move away from legal and government actions and toward fair compromises. We are committed to serving all of humanity as best we can. The world is a complex, messy and sometimes dangerous place,” Altman said.
The agreement stood in contrast to the Pentagon’s increasingly adversarial relationship with Anthropic. The AI lab draws a hard line against allowing its systems to be used as fully autonomous weapons or for large-scale domestic surveillance. That stance prompted the War Department to move toward labeling Anthropic a potential supply-chain risk.
Tensions escalated further when Trump announced Friday that he was ordering all federal agencies to stop using Anthropic’s technology. In a post on Truth Social, Trump said the government would no longer do business with the company, though he allowed a six-month transition period for departments, including Defense, that currently rely on its products.
He accused Anthropic of trying to pressure the military and claimed its actions endangered US troops and national security.
For its part, Anthropic has defended its position. CEO Dario Amodei said the company would not ease the security restrictions built into its cloud AI systems, arguing that some applications fall outside ethical and technical boundaries.
Defense Secretary Pete Hegseth later said that the War Department would formally designate Anthropic as a supply-chain risk, a move that could effectively lock the firm out of defense-related contracts.
The divergent paths taken by OpenAI and Anthropic highlight a widening divide in how leading AI developers approach military partnerships, which could reshape the future role of AI in federal technology procurement and defense operations.
