OpenAI agrees to provide models to Pentagon after Anthropic ban
OpenAI will let the Defense Department run its models on classified networks after President Trump barred Anthropic; Anthropic says it will legally challenge the designation.

OpenAI announced late Friday that it reached an agreement with the Defense Department to deploy its AI models on the Pentagon’s classified network, signaling a rapid shift in which firms supply advanced AI tools to the military. CEO Sam Altman wrote on X: “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.” He added that “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
The deal followed an administration directive that federal agencies cease using Anthropic’s technology and a Pentagon move to designate Anthropic a “Supply-Chain Risk to National Security.” Defense Secretary Pete Hegseth posted on X: “In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
Anthropic has said it will challenge the designation in court. The dispute traces to negotiations in which Anthropic resisted restrictions the Pentagon sought, including prohibitions on using its models for domestic mass surveillance and for autonomous weapons; those talks were reportedly connected to a potential military contract worth up to $200 million. OpenAI, by contrast, has said it installed technical guardrails intended to prevent domestic surveillance in the United States and to block use with autonomous weapons, and that it agreed to let the Pentagon use its systems “for any lawful purpose.”
OpenAI’s public statements include a call for the Pentagon to offer the same contractual terms to other AI companies. Altman wrote, “We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.” He also told reporters that companies should work with the military “as long as it is going to comply with legal protections” and the “few red lines” shared across industry.

The Pentagon has also been reported to have agreements or arrangements with other AI firms, including xAI and Google, for classified use of their systems. Officials have not released full contract texts or technical specifications, and the precise differences between OpenAI’s arrangement and the terms Anthropic sought have not been publicly clarified. The lack of disclosed contract language leaves open questions about how guardrails will be enforced, how audits or oversight will function, and which uses will be classified as lawful.
The episode raises immediate policy and institutional questions: whether supply-chain risk designations will be applied as a tool in procurement disputes, how the Defense Department will balance operational requirements with civil liberties protections, and what role Congress and independent oversight will play in policing contracts for dual-use technologies. Dozens of OpenAI employees have publicly urged companies to resist uses that enable domestic surveillance or autonomous weapons, underscoring tensions between workforce ethics and defense partnerships.
As Anthropic prepares a legal challenge, the contest is likely to produce public court filings and push legislators and regulators to press for clearer rules on when and how commercial AI can be integrated into classified military systems. The outcome will shape procurement norms and civic accountability for how emerging AI capabilities are deployed in national security settings.