Pentagon agrees to OpenAI red lines in principle, Anthropic faces $200M risk
Pentagon officials have accepted OpenAI’s safety red lines for classified use in principle, though no contract has been signed; Anthropic, meanwhile, risks losing a reported $200 million award.

The Pentagon has agreed in principle to OpenAI’s safety “red lines” for deploying its large models in classified settings, though no contract has been signed and negotiators are still drafting legal and operational language, people involved in the talks say. The move follows a public standoff with Anthropic that has put a reported $200 million Department of Defense award for agentic workflows at risk.
OpenAI CEO Sam Altman has outlined the company’s nonnegotiable limits. In a memo to staff, he wrote that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high‑stakes automated decisions, calling these the company’s “main red lines.” He also told employees the company would seek a contract that “covers any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”
Under the terms being negotiated, OpenAI would retain control over technical safeguards, which models are deployed and where they run, and would limit DoD deployments to cloud environments rather than edge systems such as aircraft or drones. OpenAI would also preserve the ability to strengthen security and monitoring systems as it learns from real‑world deployments, negotiators said.
Anthropic’s refusal to remove similar safeguards from its Claude model triggered pressure from Defense Department officials, who insisted models be available for “all lawful purposes.” Pentagon officials reportedly warned Anthropic it could lose the contract if it did not comply, and gave the company a hard deadline — cited by intermediaries as 5:01 p.m. ET on Friday — to reach a deal. Defense sources signaled that refusal could lead to punitive steps including a supply‑chain risk designation.

The dispute has spilled into public and corporate politics. Employees at OpenAI and Google circulated an open letter urging executives to resist Pentagon pressure; the letter charged that negotiators “are trying to divide each company with fear that the other will give in.” Altman, in a CNBC interview, said he trusted Anthropic as a company and was uncertain how the dispute would resolve. At the same time, a senior Pentagon negotiator publicly attacked Anthropic’s leader, calling him a “liar” with a “God‑complex” who was “putting our nation’s safety at risk.” The Pentagon’s top spokesman warned on social media that “we will not let ANY company dictate the terms regarding how we make operational decisions.”
The clash raises immediate procurement and operational questions for the Defense Department. A deal that embeds vendor red lines into contract language would mark a departure from traditional government demands for unfettered access and could set a precedent for cloud‑only deployments and tighter vendor control over model use. For Anthropic, losing the DoD relationship could mean both financial fallout and reputational damage; for OpenAI, an agreement would open access to classified environments while requiring the company to police downstream use.
Officials from the Pentagon and OpenAI have not finalized a contract and discussions over precise enforcement, auditing and liability provisions are ongoing. The outcome will influence how commercial AI vendors balance product safety commitments with the operational demands of national security customers.