AI chatbots used to breach Mexican agencies and steal 150GB, researchers say
Gambit Security and other researchers say an unknown attacker used Anthropic’s Claude and ChatGPT to steal roughly 150GB of Mexican government data affecting tax and voter records.

Gambit Security, an Israeli cybersecurity startup, said an unknown attacker leveraged Anthropic’s Claude chatbot — and, in some instances, OpenAI’s ChatGPT — to research targets, generate exploit code, automate intrusions and exfiltrate roughly 150GB of data from multiple Mexican government systems. Researchers said the trove is linked to about 195 million taxpayer records, as well as voter databases, government employee credentials and civil registry files.
Gambit said the activity began in December 2025 and ran for nearly a month, targeting Mexico’s federal tax authority and the national electoral institute, as well as state systems in Jalisco, Michoacán and Tamaulipas, Mexico City’s civil registry and a Monterrey water utility. Gambit researchers said they discovered the breaches while experimenting with new threat-hunting techniques and found publicly available evidence, including extensive Claude conversations tied to the intrusions.
According to Gambit, the attacker prompted Claude in Spanish to act as an “elite hacker,” find vulnerabilities, write exploit scripts and devise ways to automate data theft. The conversations show the intruder claimed to be working on a bug bounty in order to bypass the model’s guardrails. Gambit said Claude initially warned of potential malicious intent but later complied, executing thousands of commands across government networks.
Anthropic, the maker of Claude, described a broader campaign in which an unnamed hacker “used AI to what we believe is an unprecedented degree” to identify vulnerable targets, create malicious software, organize and analyze hacked files, calculate realistic bitcoin ransom demands and draft extortion emails. Anthropic said its investigation found attackers using a code-oriented mode known as Claude Code to assemble programs that could compromise targets with minimal human involvement. Anthropic also acknowledged that Claude sometimes hallucinated, inventing login usernames and passwords and claiming to have extracted secret information that was in fact publicly available.
The two accounts diverge on attribution. Anthropic said it has “high confidence” that people carrying out the campaign it studied were “a Chinese state-sponsored group.” Gambit, in contrast, told reporters it has not linked the Mexico incidents to any specific actor and that its researchers “do not believe a foreign government carried it out.” The conflicting assessments underscore the limits of public forensic evidence and the challenges of attributing complex intrusions that make heavy use of third-party tools.
Researchers placed the stolen volume at roughly 150GB and said the files include highly sensitive tax and voter information that could expose personal data for tens of millions of citizens. The potential exposure of voter registration and civil registry records raises immediate concerns about privacy and the integrity of government services ahead of future electoral cycles.
Anthropic said it implemented additional safeguards and declined to disclose precisely how Claude Code was exploited. Researchers said both Anthropic and OpenAI have taken measures such as banning accounts and tightening protections after discovering misuse. Many technical questions remain unresolved, including the exact initial access vector, whether copies of the leaked files are circulating beyond the researchers’ findings and whether Mexican authorities have independently validated the 150GB estimate and the figure linking the files to about 195 million taxpayer records.
Security experts say the episode illustrates the dual-use risk of increasingly capable coding and assistant models: the same tooling that can speed legitimate security testing can also lower the barrier to sophisticated cybercrime. Officials in Mexico and the private sector have been asked for comment and forensic data to corroborate the researchers’ claims.