U.S., China and Russia race to weaponize artificial intelligence in war
AI is already inside targeting, drones and command systems while rules lag badly. The U.S., China and Russia are racing to exploit it before guardrails harden.

The race is already underway
The sharpest competition in modern warfare is no longer about a single “killer robot” moment. It is about who can fuse artificial intelligence into targeting, drone swarms, command-and-control, cyber operations and battlefield decision-making faster than a rival can adapt. That shift has drawn comparisons to the nuclear age, but the policy gap is wider: governments are deploying systems before they have agreed on clear limits for what autonomous or AI-assisted weapons are allowed to do.
The United Nations General Assembly has warned that military AI can intensify arms races, miscalculation, escalation and proliferation to non-state actors. The International Committee of the Red Cross has gone further, saying the deployment of increasingly autonomous weapon systems is already a fact of contemporary conflicts. That makes the central problem less theoretical than institutional: the technology is moving faster than the rules, and the rules are still fragmented.
A legal framework that has not caught up
Washington’s own policy reflects that uncertainty. The U.S. Department of Defense updated Directive 3000.09 on autonomy in weapon systems in January 2023, but Congress’s March 26, 2026 primer on lethal autonomous weapon systems says there is still no agreed international definition of such systems. That definitional gap matters because it leaves states arguing over where autonomy begins, where human control ends and which systems should be restricted before they spread.
The United Nations has also tied military AI to humanitarian, legal, security, technological and ethical concerns, including the possibility that machine-driven decision-making could lower the threshold for war. In practice, that means states can pursue “defensive” efficiency gains while still expanding the speed and scale of violence. The result is a policy vacuum: clear enough to show the risks, too unsettled to contain them.
China’s push toward intelligentized warfare
China has been unusually explicit about where it wants to go. Its 2019 national defense white paper, National Defense in the New Era, linked artificial intelligence, quantum technology, big data, cloud computing, autonomous systems and the internet of things to a shift toward a new “intelligentized” era of warfare. A 2024 Army War College review said that white paper underscores Beijing’s effort to converge on intelligentized warfare, not merely modernize legacy forces.
Recent reporting suggests Beijing is making selective bets rather than assuming it can match the U.S. quickly across the board. Defense News reported on April 7, 2026 that a Chinese institution supervised about 200 autonomous vehicles at once in a drone-swarm test, a sign of serious experimentation with mass coordination rather than single-platform showpieces. That matters because swarming systems can complicate air defenses, overwhelm sensors and create new escalation risks if commanders cannot reliably distinguish a training exercise from a live attack.
The likely contest zones are not abstract. The Taiwan Strait and South China Sea already sit at the center of military planning, and AI promises to compress the time available for decisions in both places. If Beijing believes it can gain an edge in sensing, targeting and electronic warfare before a crisis peaks, the pressure to use those tools first could become as destabilizing as the tools themselves.
The U.S. is already using AI in the kill chain
The United States is not waiting for a future battlefield. CBS News reported on March 18, 2026 that AI systems are helping U.S. forces process roughly 1,000 potential targets a day in some operations. Retired Navy Adm. Mark Montgomery said the turnaround time for the next strike could be under four hours, a pace that turns artificial intelligence into a force multiplier not just for analysis but for the tempo of war.
That speed helps commanders sift through documents, imagery and large data sets, build targeting packages, assign strike assets and assess damage after an attack. But the same efficiency raises a hard governance question: if software is driving the pace of target selection, how much human review is enough, and who carries responsibility when the machine narrows the options too aggressively? The practical effect is that AI is moving from back-office support into the operational core of U.S. military power.
Russia’s wartime adaptation
Russia’s role in this race is different, but no less consequential. Analysts say Moscow has been reshaping its command-and-control architecture under wartime pressure, a process that has accelerated through the Ukrainian battlefield. Russia’s military adaptation is not about matching the U.S. or China platform for platform; it is about finding ways to survive under constant pressure, preserve command coherence and make better use of limited information.
That makes AI relevant in the most immediate sense: battlefield perception, cyber operations, target prioritization and decision-support software can help Russian commanders process incoming data faster than human staffs alone. In a war environment, even modest gains in command speed can change the rhythm of strikes, air defense and counter-drone operations. The danger is that wartime improvisation can also normalize more autonomous tools before legal or operational safeguards are in place.
Why the escalation risk is so high
The most serious fear is not a science-fiction uprising. It is a chain reaction of shorter decision cycles, imperfect data and machines that reward speed over caution. If AI systems are embedded in early warning, targeting or battlefield management, they can amplify false positives, encourage preemption and make it harder for leaders to step back once systems are already in motion.
That is why the competition is so closely tied to the risk of escalation in places like the Taiwan Strait, the South China Sea and the Ukrainian battlefield. The same software that helps commanders digest information faster can also make them more willing to launch before they fully understand what they are seeing. In a crisis, that is the difference between deterrence and a runaway exchange.
The defense-industry race behind the weapons race
The contest is also reshaping defense procurement. NATO’s Supreme Allied Commander Transformation, Adm. Pierre Vandier, called the current competition in Western defense a “Darwinian moment,” a blunt assessment of how quickly fast-moving startups are challenging established contractors. POLITICO reported on April 6, 2026 that Anduril won a 10-year U.S. Army contract that could be worth up to $20 billion, while Germany backed major drone deals for newer firms such as Helsing and Stark.
That industrial shift matters because the next phase of the arms race will be decided not only by national strategy but by procurement speed, software updates and the ability to field systems at scale. The countries that can turn algorithms into deployable military capability fastest will shape the balance of power. What remains missing is a durable global framework for keeping those systems under human control before they become too embedded in war to pull back.
