Technology

Anthropic refuses DoD demand, resists removal of AI safeguards

Anthropic says it will not drop safeguards barring mass surveillance and autonomous weapons despite Pentagon threats to label it a security risk and cut off contracts.

Dr. Elena Rodriguez · 3 min read

Anthropic posted a blunt statement on March 5 saying it will not strip two narrow safeguards from its Claude models, even after the Department of Defense pressured the company to accept an “any lawful use” clause and threatened punitive steps that could sever its ties to the U.S. government. The dispute centers on what Anthropic says are protections against mass domestic surveillance and fully autonomous weapons, and on whether the Pentagon can force their removal.

Anthropic’s post, titled “Where things stand with the Department of War,” said the company understands that the Department of War, not private companies, makes military decisions and that it “has never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.” It also warned that the Department had told companies it would only contract with vendors that accede to “any lawful use,” and that officials had “threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal.”

Anthropic said those threats are contradictory and that it “cannot in good conscience accede to their request.” The company offered to continue serving the Department with the safeguards in place and pledged to “work to enable a smooth transition to another provider” if offboarded, keeping its models available “on the expansive terms we have proposed for as long as required.”

The fight unfolded amid tense, last-minute negotiations over a roughly $200 million AI contract being brokered inside the Pentagon. Emil Michael, described by officials as the Defense Department’s chief technology officer, had been negotiating a deal with Anthropic while also pursuing a parallel framework with a rival provider. Minutes before a 5:01 p.m. Friday deadline, Michael was reportedly “fuming,” and at 5:14 p.m. Defense Secretary Pete Hegseth posted that Anthropic had been designated a security risk and would be cut off from working with the U.S. government. “America’s warfighters will never be held hostage by the ideological whims of Big Tech,” Hegseth wrote.

AI-generated illustration

Legal and commercial questions now hang over the dispute. Anthropic cited 10 U.S.C. 3252 as the statute the Pentagon would use to apply a supply-chain-risk label, but federal contract experts say its reach is unclear when it comes to how DoD contractors use third-party software. Alex Major, a partner at McCarter & English who advises technology companies, said the announcement “is not mired in any law we can divine right now.” Industry lawyers warn that a legal battle could take months or years and damage business relationships in the interim.

The standoff has already put partners and customers in a holding pattern. Companies that work closely with Anthropic, including major cloud and chip vendors, are reportedly assessing exposure and awaiting formal DoD guidance beyond public statements. Anthropic’s post also said the company had shut down Chinese Communist Party-sponsored cyberattacks that attempted to abuse Claude and that it has advocated for strong export controls on chips to preserve a democratic advantage.

The Pentagon declined to comment, and a Defense Department spokesperson could not be reached for further response. For now, the dispute exposes a central tension in U.S. AI policy: how to reconcile military demand for powerful capabilities with company-imposed guardrails on civil liberties and autonomous force, even as the legal tools the government might use to compel compliance remain contested.
