OpenAI Strikes Pentagon Deal After Anthropic Fallout
AI Watch | Feb 28, 2026
OpenAI has agreed to deploy its AI models within the US Defense Department's classified network, following a public breakdown in the Pentagon's relationship with rival AI company Anthropic.
OpenAI CEO Sam Altman announced the agreement on Friday, saying it upholds the company's principles against domestic mass surveillance and requires human oversight of the use of force, including in autonomous weapons systems. OpenAI also said it had built additional safeguards into the deployment to ensure its models behave as intended.
The announcement came hours after the Pentagon designated Anthropic a supply-chain risk — an unprecedented move against a US company. Dean Ball, a former Trump administration AI adviser, called the decision "attempted corporate murder."
In a follow-up statement on Saturday, OpenAI said its deal includes stronger safeguards than any previous classified AI contract, including Anthropic's. The company said its technology would not be used for mass domestic surveillance, to direct autonomous weapons, or to run high-stakes automated behavioral tracking systems. Unlike some competitors that rely solely on usage policies to define their limits, OpenAI said it maintains direct control over safety through contractual provisions and security-cleared staff working alongside government personnel.
OpenAI said it disagrees with the Pentagon's decision to designate Anthropic a supply-chain risk, and expressed hope that its agreement could ease broader tensions between the government and AI developers. "We don't know why Anthropic could not reach this deal, and we hope that they and more labs will consider it," the company said.
Anthropic, for its part, has drawn firm lines against its technology being used to surveil Americans or power fully autonomous weapons. Following the Pentagon's decision, the company said it would not change its position and pledged to challenge any formal supply-chain risk designation in court. CEO Dario Amodei described the move as "retaliatory and punitive" in a CBS News interview.
According to a person familiar with the negotiations, the Pentagon had offered Anthropic revised terms earlier in the week that incorporated some of its proposed language on surveillance and autonomy. Anthropic rejected those terms, however, believing they did not go far enough to prevent the department from setting the restrictions aside whenever it deemed necessary.
One notable distinction in Altman's announcement is that OpenAI stopped short of Anthropic's explicit prohibition on fully autonomous weapons. His commitment to "human responsibility for the use of force" closely mirrors existing Pentagon policy, which calls for "appropriate levels of human judgment" in weapons use — language that critics argue leaves significant room for interpretation.
Defense Secretary Pete Hegseth gave Anthropic a six-month window to transition its Pentagon services to another provider. "America's warfighters will never be held hostage by the ideological whims of Big Tech," he wrote on X, adding that "this decision is final." Shortly after, President Trump ordered federal agencies to drop Anthropic's products.
The Pentagon has also separately reached a deal with Elon Musk's xAI to operate its Grok chatbot on classified cloud infrastructure.
OpenAI is already involved in a Pentagon program to develop voice-controlled autonomous drone swarm technology, where its models are used to translate voice commands into digital instructions. The company says this work is consistent with its usage policies.
The broader backdrop to this dispute is a competitive AI industry racing toward profitability. OpenAI recently closed a $110 billion funding round valuing the company at $730 billion. Anthropic raised $30 billion earlier this month. Both companies are reported to be considering initial public offerings.
Amodei previously worked at OpenAI before leaving in 2020, citing concerns that the company was prioritizing commercial growth over safety. OpenAI, which began as a nonprofit, updated its policies in 2024 to permit military applications and has since removed the word "safely" from its mission statement.

