
Pentagon's AI War Machine Addiction, From Project Maven to Autonomous Strikes

The Pentagon's AI addiction, sparked by a 2017 imagery project, is now being stress-tested in active combat, raising alarms about accountability, escalation, and who controls the kill chain.

Lisa Park · 5 min read
AI-generated illustration

The words circulating among some Pentagon insiders capture the mood precisely: "God, it's terrifying." That line, surfaced in Bloomberg Businessweek's deep-dive package on the U.S. military's AI dependency, reflects not just fear of the technology but of the institutional momentum behind it. What began nearly a decade ago as a modest experiment in video analysis has metastasized into a strategic reliance on commercial AI that is now being tested not in laboratories or war games, but in live combat.

From a Windowless Pentagon Room to a National Strategy

Project Maven was established by a memo from the U.S. Deputy Secretary of Defense on April 26, 2017. Its original mandate was narrow: the Pentagon created the Algorithmic Warfare Cross-Functional Team, known as Project Maven, to use machine learning to automate the analysis of video and imagery for military use. The pitch to Silicon Valley engineers was careful. Jack Shanahan, the program's early leader, presented it as human-in-the-loop decision support inside the Department of Defense, not as an autonomous weapons platform.

That framing helped open doors. Bloomberg's reporting documents how Maven normalized defense-tech partnerships and created contracting and hiring pathways for private AI firms that had previously kept their distance from Pentagon work. The precedent set in those early years proved durable: once the technical and institutional infrastructure for commercial-military AI collaboration existed, broadening its scope became a matter of procurement rather than philosophy.

The Iran Stress Test

The clearest evidence of how far the Pentagon's AI ambitions have traveled comes from the conflict with Iran. During the first day of U.S. military operations against Iran in early 2026, the Maven Smart System reportedly generated more than 1,000 strike options, enabling coordinated strikes against approximately 900 targets within a 12-hour window. That figure alone illustrates the qualitative shift from Maven's origins as an imagery-tagging tool to its current role as a targeting engine operating at machine speed.

Bloomberg's mid-March 2026 reporting documented how the Iran conflict has become a live stress test for the Pentagon's broader AI strategy, showing how experimental systems and contractor partnerships are being evaluated in active operations rather than controlled settings. The gap between prototype and deployment, once measured in years of review cycles, has compressed into something closer to real time.

The Dependency Machine

The scale of institutional commitment is visible in the budget numbers. The Pentagon requested $13.4 billion for AI-enabled systems for 2026 alone, and the military is also spending as much as $9 billion on data centers and computing capabilities customized for its security needs. These figures reflect not a pilot program but an embedded dependency: a procurement infrastructure so large that reversing or even significantly redirecting it would require political will that has rarely materialized in defense spending cycles.

Bloomberg's Businessweek reporting argues that repeated procurement cycles and wartime experimentation have deepened the military's reliance on commercial AI capabilities even as many programs face technical, ethical, and logistical challenges. Missed deadlines, false starts, and cost overruns, familiar features of large defense programs, have not slowed the acquisition momentum; they have been absorbed into it. The rapid evolution of commercial AI has added a new variable: systems that outpace the institutional processes designed to govern them.

The industrial consequences extend beyond the Pentagon's balance sheet. Major contracts and procurement signals accelerate investment toward specific AI architectures and companies, creating vendor dependencies that are difficult to unwind. Contractors supporting Maven were among the first to benefit from this dynamic, and the pattern has since repeated across dozens of successor programs. Critics warn that this market concentration could crowd out alternative approaches and leave the military strategically exposed if preferred vendors face technical failures, legal challenges, or political pressure.

Speed, Accountability, and the Laws of War

The operational advantages of AI-accelerated targeting come bundled with accountability questions that existing legal frameworks were not built to answer. International humanitarian law places the ultimate responsibility for the use of force on human commanders. But as the Maven Smart System's Iran performance demonstrates, when a machine generates over a thousand strike recommendations in hours, the notion of meaningful human review at each decision point becomes difficult to sustain in practice.

U.S. Department of Defense Directive 3000.09, first issued in 2012 to govern autonomy in weapon systems, mandates a review process for any system that could operate without human control and calls for appropriate levels of human judgment over the use of force. Whether that directive's language maps onto systems that technically keep a human in the loop while compressing deliberation windows to seconds is a legal question without a settled answer.

As AI systems increasingly influence operational decisions, including decisions concerning the use of force, adjustments will be required in international law, rules of engagement, and accountability mechanisms. At the international level, the UN General Assembly's First Committee passed a third consecutive resolution on lethal autonomous weapons systems in November 2025, reflecting growing multilateral concern that policy is running far behind deployment.

Governance Under Pressure

Bloomberg's Businessweek reporting documents rising scrutiny from some defense officials and lawmakers calling for stronger governance and clearer rules around autonomy and human control in weapons systems. Pentagon acquisition offices face competing pressures: accelerate delivery to maintain operational advantage while simultaneously responding to congressional and public demands for meaningful oversight. The reporting suggests this debate has moved from academic forums into the halls of power precisely because programs are now being trialed in real combat conditions, making the stakes concrete rather than hypothetical.

From Iran to Venezuela, the U.S. military is rolling out an "AI-first" approach to warfare, even as disputes between the Defense Department and its technology partners over acceptable use cases remain unresolved. That unresolved tension, between urgency and governance, between capability and accountability, is the central drama Bloomberg's package captures. The next phase will require not only technical fixes but policy frameworks that define the roles and hard limits of AI on the battlefield. The decisions made in acquisition offices and congressional hearing rooms over the coming years will shape the defense-tech market, international norms, and the practical meaning of human control in warfare for a generation.
