
Judge Labels Pentagon's Moves Against AI Firm Anthropic Troubling

A federal judge said the Pentagon's blacklisting of Anthropic "looks like an attempt to cripple" the AI firm, warning the three actions taken against it don't appear tailored to any real security concern.

Maria Santos · 4 min read

"I don't know if it's murder, but it looks like an attempt to cripple Anthropic," U.S. District Judge Rita Lin said from the bench in San Francisco Tuesday, delivering her sharpest assessment yet of a White House campaign that has rapidly severed one of the government's closest AI partnerships.

Lin focused specifically on three Trump administration actions: President Trump's ban on Anthropic, Defense Secretary Pete Hegseth's requirement that Pentagon contractors cut commercial ties with the company, and its designation as a supply chain risk. "What is troubling to me about these three actions is that they don't really seem to be tailored to the stated national security concern," she said. "If the worry is about the integrity of the operational chain of command, [the Pentagon] could just stop using Claude."

Anthropic sued earlier this month to stop the Trump administration from enforcing what the company calls an "unlawful campaign of retaliation" over its refusal to allow unrestricted military use of its technology. Before the conflict erupted in late February, Anthropic was one of the first AI companies to partner with many federal agencies as the government sought to rapidly upgrade its systems, and had signed a $200 million contract with the Pentagon in July.

Lin had sent both sides a number of questions ahead of Tuesday's hearing, including about discrepancies between Hegseth's formal directive declaring Anthropic a potential threat to national security and what he posted about it on social media. Hegseth and other high-ranking officials publicly insisted the company must accept "all lawful" uses of Claude, threatened punishment if Anthropic did not comply, and condemned the firm and its CEO Dario Amodei on social media. When Amodei refused to bend, Trump announced on Feb. 27 he was immediately ordering all federal employees to stop using Anthropic, calling it a "radical left, woke company" that was putting troops at risk.

The Pentagon's lawyer argued that the social media posts are not legally binding; Lin said she found that argument "pretty surprising ... obviously the statement is front and center in this lawsuit."

The government's court filings paint Anthropic as a potential saboteur of wartime systems. The Trump administration filing warned that "AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing war-fighting operations, if Anthropic — in its discretion — feels that its corporate 'red lines' are being crossed." Included in the administration's court filings is a March 2 memorandum from U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer and a former Uber executive, outlining the rationale for labeling Anthropic's products as risky. Michael also filed two sworn declarations with the court, one shortly before the hearing.

AI-generated illustration

The Pentagon argues in court filings that Anthropic has asked for an "operational veto" of the Pentagon's decision-making and that Anthropic has full control over Claude's availability and performance, which department officials say would be inappropriate and dangerous in sensitive operations. Anthropic denies it has operational control over the model once deployed in classified settings.

The supply chain risk designation, imposed earlier in March, asserts that use of Anthropic's technology threatens U.S. national security. If it stands, it will require defense contractors including Amazon, Microsoft, and Palantir to certify that they do not use Claude in their work with the military. Without an injunction, Anthropic has said it could lose billions of dollars in business.

Microsoft, itself a major government contractor subject to the certification requirement, filed a brief siding with Anthropic. Anthropic's lawyer Michael Mongan told the court: "This is something that has never been done with respect to an American company. It is a very narrow authority. It doesn't apply here, and it's not a normal way to respond to the concerns that have been articulated by the other side." Microsoft echoed that concern, arguing that the Pentagon's action "forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a U.S. company."

Anthropic's lawsuit alleges the government's actions violated the First Amendment and its due process rights. "Put simply, the Executive Branch is leveraging its powers to punish a major American company for the sin of expressing its views on a matter of profound public significance," the company said in a legal filing.

Lin said she expects to issue an order on Anthropic's motion within the next few days. If the preliminary injunction is granted, the AI startup would be able to continue doing business with government contractors and federal agencies while its lawsuit plays out in court; without it, the company says it faces billions of dollars in lost business and further reputational harm. Anthropic asked for a decision by March 26, though the court is not bound by that date.
