Trump orders federal agencies to cut Anthropic ties, Pentagon bars vendors

Trump ordered all federal agencies to stop using Anthropic's AI; Pentagon labeled the company a "supply-chain risk" and threatened Defense Production Act action.

Lisa Park · 3 min read

In a Truth Social post late Friday, President Trump ordered "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology," igniting a government-wide severing of ties with the company and setting up an unprecedented legal and operational standoff.

Defense Secretary Pete Hegseth responded by designating Anthropic a "supply-chain risk to national security" in a post on X, saying: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. America’s warfighters will never be held hostage by the ideological whims of Big Tech." Hegseth also threatened to invoke the Defense Production Act to compel Anthropic to allow the Defense Department unrestricted use of its AI models if the company did not comply.

The Pentagon had given Anthropic a deadline—reported as 5:01 p.m. on Friday—to accept usage terms allowing the department to employ the company's models "for all lawful purposes." Hegseth's designation followed shortly thereafter; one account noted it arrived 13 minutes after the deadline. Officials said the department will permit a transition period of up to six months to wind down critical military uses of Anthropic’s services.

Anthropic CEO Dario Amodei pushed back publicly, writing in a company statement that the firm "cannot in good conscience accede to their request" to allow unrestricted military use. Amodei said Anthropic would rather cut ties with the government than lift the restrictions, setting the stage for an almost certain legal challenge over whether an American AI company can be treated like a foreign supply-chain threat.

The move carries immediate implications beyond defense. Anthropic, maker of the chatbot Claude, had been supplying models and services that private companies and government contractors integrated into analytics, content moderation and automated workflows. The supply-chain risk designation bars any military contractor, supplier or partner from doing business with Anthropic, a constraint that lawyers and industry officials say could ripple through procurement chains and corporate partnerships.

Legal experts called the designation "all but unheard-of" for a U.S. technology firm and warned the precedent could redraw boundaries between national security and private-sector control of advanced software. Anthropic is expected to contest the step in court, arguing the Pentagon’s demands exceed lawful scope and imperil corporate governance and user protections.

Public health systems and community providers reliant on third-party AI tools face added uncertainty. Hospitals and clinics that have piloted or adopted third-party models for scheduling, triage or population health analytics may confront sudden vendor changes, potential service interruptions and higher costs for replacement technologies. Those disruptions often fall hardest on rural hospitals and safety-net clinics with limited technical teams and budgets, amplifying existing health inequities.

Sean Parnell, the Pentagon’s chief spokesperson, posted on X to counter concerns about misuse: "The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." Nonetheless, officials acknowledged the action could "vastly complicate government intelligence analysis and defense work" while industry watchers said the confrontation will accelerate federal debates over "sovereign" AI architectures and vendor safeguards.

The standoff leaves government agencies, defense contractors and health-care providers weighing legal risks against operational needs. In the coming days the Pentagon must clarify whether it will formally trigger the Defense Production Act and provide a definitive timeline for the phase-out, while Anthropic’s next moves in court or in negotiation will determine whether this dispute reshapes how the United States balances security, corporate autonomy and equitable access to AI-powered services.
