Pentagon delivers final offer to Anthropic for unrestricted military AI use
Pentagon officials sent Anthropic a best-and-final offer today to secure unrestricted military use of its AI models, raising oversight, safety, and equity concerns.

Pentagon officials delivered a best-and-final offer to Anthropic on Feb. 26, 2026, seeking permission for unrestricted military use of the company's AI models, sources said, a move that could rapidly expand the role of generative and decision-assisting systems across U.S. defense operations. The offer signals a near-term push to operationalize commercial AI in intelligence, logistics, and combat support without the limitations some civil society groups and researchers have urged.
If accepted, the offer would open an avenue for the Defense Department to deploy Anthropic's models in a wider set of applications than previously permitted under restrictive procurement terms, placing a commercial AI firm at the center of a national security strategy that increasingly relies on automated analysis, prediction, and command support. Officials argue such integration accelerates capabilities and reduces dependence on bespoke military systems, but the absence of publicly disclosed restrictions raises questions about accountability, testing standards, and civilian risk mitigation.
Public health experts and community advocates warn that rapid militarization of advanced AI carries spillover harms for civilians and health systems. Autonomous targeting, predictive surveillance, or accelerated intelligence cycles can heighten the risk of misclassification and collateral harm that damages hospitals, disrupts emergency response, and displaces communities already vulnerable to violence. In addition, expanded cyber and information operations powered by sophisticated models may undermine public trust in health messaging and exacerbate misinformation during disease outbreaks, complicating outbreak control and vaccination campaigns.
The offer also intensifies longstanding debates about governance and social equity. Communities of color, low-income neighborhoods, and populations in conflict-affected regions have historically borne disproportionate harm from surveillance and force multipliers. Without binding safeguards, model deployment could entrench biased decision-making in targeting, detention prioritization, or border enforcement. At the same time, defense contracting often consolidates advanced capabilities within a small set of vendors, skewing economic benefits away from communities that face greater public health and social needs.
Policy experts say the development exposes gaps in civilian oversight and interagency coordination. Current DoD guidelines and traditional export controls were not written for large-language models and multimodal systems that can be repurposed for a wide array of functions. Health agencies and emergency managers have limited formal voice in defense acquisitions, yet their missions intersect when AI-driven operations affect hospitals, supply chains, and population movements. Advocates call for statutory transparency requirements, independent safety audits, red-team testing focused on civilian consequences, and clear prohibitions on autonomous lethal action in procurement contracts.
Congressional scrutiny is likely to follow as lawmakers weigh national security benefits against ethical, legal, and public health risks. For communities on the front lines of surveillance or conflict, the decision will not be abstract: it will shape whether a privately developed model informs life-and-death choices, how errors are remedied, and who bears the costs. The Pentagon’s final offer to Anthropic crystallizes a pivotal choice about the pace and shape of AI adoption in defense, one that will determine not just battlefield capabilities but the safety, dignity, and health of civilians at home and abroad.