OpenAI and Google Employees Back Anthropic in Pentagon "Supply Chain Risk" Lawsuit
Dozens of AI researchers crossed company lines to defend Anthropic, warning the Pentagon's blacklist threatens U.S. competitiveness and safety discourse.

Dozens of current employees at OpenAI and Google DeepMind filed a legal brief Monday supporting Anthropic's lawsuit against the U.S. Department of Defense, warning that the Pentagon's decision to designate the AI company a "supply chain risk" amounts to improper retaliation that threatens American leadership in artificial intelligence.
The amicus brief, filed just hours after Anthropic sued the Defense Department and other federal agencies, supports Anthropic's request for a temporary restraining order that would allow it to continue working with military contractors while the litigation proceeds. Amicus briefs are filings submitted by parties not directly involved in a case but with relevant expertise.

Estimates of the number of signatories vary by outlet, ranging from more than 30 to nearly 40. Among the most prominent is Jeff Dean, Google DeepMind's chief scientist and Gemini lead. Other named signatories include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak. All signed in a personal capacity and stated they do not represent the views of their employers.
The group described themselves as "engineers, researchers, scientists, and other professionals employed at U.S. frontier artificial intelligence laboratories," and they acknowledged that their companies compete directly with Anthropic. That competitive reality made their intervention all the more striking.
The brief's central argument is that the DoD's designation punishes Anthropic for principled positions on AI safety. Anthropic had refused to allow the Pentagon to use its technology for mass domestic surveillance or the development of autonomous lethal weapons systems, and negotiations with the Pentagon subsequently broke down. The signatories argued that the DoD could have simply canceled its contract with Anthropic rather than branding the company a national security liability.
"If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the brief states, as quoted by Wired.
The filing further argues that Anthropic's concerns about those specific use cases are legitimate. The designation "is improper retaliation that harms the public interest," the brief states, and the concerns behind Anthropic's red lines "are real and require a response." On autonomous weapons, the brief states that "fully autonomous lethal weapons systems present risks that must also be addressed," and on surveillance: "mass domestic surveillance powered by AI poses profound risks to democratic governance, even in responsible hands."
The signatories also raised technical objections, noting that current AI systems remain susceptible to hallucination and arguing that it is too early to entrust life-and-death decisions to full automation.
The legal action comes amid a rapidly shifting landscape for AI companies in their dealings with Washington. Shortly after labeling Anthropic a supply chain risk, the Defense Department signed a separate AI deal with OpenAI, a move that prompted protests from some of that company's own employees. Meanwhile, the White House is reportedly working on a presidential order that would formally ban federal agencies from using Anthropic's AI tools entirely.
OpenAI and Google did not immediately respond to requests for comment. The case, filed in federal court, is proceeding as the technology industry watches closely to see whether the government can use procurement sanctions to effectively discipline AI companies that resist certain military applications.

