State Actors Exploit Google's Gemini to Sharpen Cyber Attacks
Google says state-sponsored actors from China, Iran, Russia and North Korea repeatedly misused its Gemini AI throughout 2025 to refine and accelerate cyber operations, revealing gaps in model-level defenses. The finding, detailed in a new Threat Intelligence Group report, underscores a widening technological arms race with implications for national security, corporate risk management and AI governance.

Google’s Threat Intelligence Group (GTIG) disclosed today that state-sponsored threat actors from China, Iran, Russia and North Korea misused its Gemini artificial intelligence model throughout 2025 to support malicious cyber campaigns, despite company efforts to detect and prevent such activity. The GTIG report, titled "AI Threat Tracker: Advances in Threat Actor Usage of AI Tools," lays out how adversaries are incorporating large language and multimodal models into their tradecraft at multiple points in an operation.
According to the report, Gemini has been leveraged across many stages of attack campaigns, a pattern that reflects broader shifts in the cyber threat landscape as advanced AI tools become widely available. Google says its security guardrails for Gemini trigger "safety responses" when threat actors request assistance with malicious activities, but the company’s findings show that those protections are not a panacea. Adversaries are adapting their tactics: obfuscating requests, fragmenting queries across multiple prompts, testing benign inputs to probe model behavior, and combining AI outputs with human operators to evade detection.
The GTIG analysis does not tie misuse to a single technique, but it highlights a worrying trend: models originally designed to increase productivity are being repurposed to accelerate reconnaissance, craft more persuasive social-engineering content, refine malware code and scale targeted deception. That shift reduces the time and expertise required to stage sophisticated intrusions, amplifying risks to critical infrastructure and private-sector networks alike.
Google framed the disclosure as part of a broader effort to document how threat actors adopt emerging technologies, but the revelation also raises questions about the sufficiency of current defenses. Model providers have traditionally relied on content filters, usage monitoring and API controls to block illicit requests, but the GTIG report suggests those measures can be circumvented by determined, well-resourced operators. The involvement of nation-state actors further complicates mitigation, given their access to technical talent, funding and operational patience.
Cybersecurity experts say the findings should prompt a recalibration of both corporate and government responses. Providers of powerful models will need to expand threat-detection tooling, invest in red-teaming that simulates state-level adversaries, and strengthen telemetry sharing with incident-response partners. Governments must weigh regulatory and cooperative options that balance innovation with national security, including standards for model governance, mandatory breach reporting, and international norms to limit offensive use.
The GTIG disclosure also spotlights a broader ethical and policy question: how to preserve the societal benefits of generative AI while constraining its misuse. As models grow more capable, the race between defenders and attackers is likely to accelerate. The report serves as a reminder that technological stewardship will require sustained collaboration across industry, academia and government to outpace adversaries who are already incorporating AI into their playbooks.

