Pentagon used Anthropic’s Claude AI in Iran strikes despite federal phaseout
CBS reports the Pentagon deployed Anthropic’s Claude AI in Iran strikes, even as President Trump ordered a six-month phaseout and officials scramble to replace the system.

CBS reported March 3 that two sources familiar with U.S. military operations say the Pentagon used Anthropic’s Claude AI model over the preceding weekend to assist strikes on Iran and continued to use the system afterward, even though the White House has ordered federal agencies to stop using Anthropic products and allow six months to phase them out. The disclosure intensifies a policy fight between the company and the Defense Department over guardrails for military applications of large language models.
According to the two sources, the U.S. used Anthropic’s Claude AI model over the weekend for the attack on Iran and is still using it. The network said the use was first reported by The Wall Street Journal and noted earlier reporting that the U.S. employed Claude in the operation that captured Venezuelan leader Nicolás Maduro from his Caracas compound earlier this year.
The deployment comes amid a dispute in which Anthropic pressed for explicit limits preventing its models from enabling mass domestic surveillance or powering fully autonomous weapons. Anthropic’s public usage policy prohibits domestic surveillance or weaponization, and company representatives have told Pentagon officials they were worried Claude could be used to spy on Americans or guide weapons without sufficient human oversight, Reuters reported.
Pentagon leaders pushed back, arguing the department should be able to use Claude for “all lawful purposes,” and that existing law and internal policy already ban mass surveillance of U.S. persons and autonomous weapons without human control. President Trump announced last week that federal agencies must stop using Anthropic technology, and Defense Secretary Pete Hegseth has labeled Anthropic a “supply chain risk.” An unnamed senior Pentagon official, quoted by Axios, warned of the operational fallout, saying, “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
CBS said a Pentagon official identified only as “Michael,” described as a chief technology officer, told the network that the Defense Department uses Claude for tasks such as synthesizing documents and improving logistics and supply chains. The official’s full identity and title remain unclear in public reporting.

Anthropic signed a roughly $200 million contract with the Pentagon in July last year, a deal the company highlighted in a press release while describing itself as a leader in “safe and responsible AI” with “strict usage policies.” The role Claude has already assumed inside U.S. systems has complicated efforts to remove it: Defense One, citing multiple defense sources, reported it could take three months or longer for the Pentagon to replace Claude’s capabilities with another platform. The department has recently moved to add alternatives, including a deal to use Elon Musk’s xAI Grok model for some classified systems, and major defense contractors such as Palantir also use Claude in work for the Pentagon.
Operational details remain opaque. The Pentagon has not publicly specified how Claude was deployed in the Iran strikes or in earlier operations, and Israeli military use of the model has not been confirmed; an Israeli Defense Forces spokesperson did not respond to CBS. Analysts warn that rapid integration of commercial AI into warfare creates legal, ethical, and operational risk, even as officials emphasize that adoption is global and that militaries that lag risk strategic disadvantage.
The disclosures underscore a fraught moment in which national security, corporate policy, and presidential orders collide over new battlefield technologies, while officials confront the practical difficulty of disentangling an AI system already embedded in classified and unclassified military workflows.