Technology

Google Says Hackers Used AI to Find and Exploit Zero-Day Flaw

Google said hackers used AI to uncover a zero-day flaw and prepare a mass attack, a sign that cybercrime is moving from scam-writing to vulnerability hunting.

Lisa Park · Written with AI · 2 min read
Source: usnews.com

Hackers have crossed a new line in the cybersecurity race, Google said, after the company detected an attacker using artificial intelligence to uncover a previously unknown software flaw and prepare it for large-scale abuse. The company said the exploit targeted a widely used open-source system administration tool and was intended for a mass exploitation event, but appears to have been stopped before it could spread.

Google’s Threat Intelligence Group said this was the first zero-day exploit it believes was developed with AI. That makes the finding more than a one-off breach warning. It suggests AI is no longer only helping criminals draft phishing emails or summarize stolen data. It is now being used earlier in the attack chain, in the search for weaknesses and the weaponization of those flaws.

AI-generated illustration

The company said the operation reflects a “maturing transition” from early AI-enabled activity to industrial-scale use of generative models in hostile workflows. John Hultquist, Google Threat Intelligence Group’s chief analyst, said the discovery may be only “the tip of the iceberg.” Google also said it alerted the maintainers of the affected tool, and the vulnerability has since been patched.

The stakes are especially high for businesses, hospitals, schools and government agencies that still struggle to close known security gaps. If attackers can use AI to identify and package new flaws faster, the pressure on understaffed security teams will only grow. Google said the same trend is visible among state-backed hackers, including groups linked to China, North Korea and suspected Russia-nexus operations.

Source: securityweek.com

The company has been watching that shift build for months. In February, Google said threat actors were already using AI to gather information, create highly realistic phishing scams and develop malware. In November, it said it had found PROMPTFLUX and PROMPTSTEAL, the first known malware families to use large language models during execution. Its May 11 report said newer AI-enabled malware, including PROMPTSPY, can generate commands dynamically based on system state.

The Pancake of Heaven! via Wikimedia Commons (CC BY-SA 4.0)

Google said it is responding with Gemini safeguards, including classifiers, in-model protections and disabling malicious accounts. It is also using defender-side AI tools such as Big Sleep and CodeMender to find and fix vulnerabilities. The broader message is stark: the fight is moving from automated spam to AI-assisted discovery of the software flaws that can crack open entire networks, and that shift will favor the side that patches fastest.
