Anthropic says 24,000 fake accounts generated 16 million Claude exchanges
Anthropic alleges DeepSeek, Moonshot and MiniMax used about 24,000 fraudulent accounts to extract over 16 million conversations from Claude, raising security and export-control concerns.

Anthropic said in a blog post that three China-based AI developers — DeepSeek, Moonshot and MiniMax — used roughly 24,000 fraudulent accounts to generate more than 16 million conversations with its Claude chatbot, a volume the company says was intended to train competing models through a technique called distillation.
Anthropic described the activity as industrial-scale capability extraction. "The three distillation campaigns ... followed a similar playbook, using fraudulent accounts and proxy services to access Claude at scale while evading detection," the company wrote in its public statement. Anthropic said the campaigns targeted what it calls Claude’s most differentiated capabilities, including agentic reasoning, tool use and coding, and that the traffic patterns and metadata point to deliberate, systematic harvesting rather than ordinary user behavior.
The company said the operations relied on commercial proxy services that resell access to frontier models and on sprawling networks of fraudulent accounts. Anthropic described such arrangements as "hydra clusters," and said in at least one instance a single proxy network managed more than 20,000 fraudulent accounts simultaneously. The company also reported tracing request metadata to staffers at DeepSeek and Moonshot, a claim that has not been independently verified.
Anthropic framed the activity as a national security and safety risk. "Illicitly distilled models lack necessary safeguards, creating significant national security risks," the company warned, listing potential misuses such as offensive cyber operations, disinformation campaigns and mass surveillance. It urged coordinated action by industry and policymakers, writing that "these campaigns are growing in intensity and sophistication" and that "the window to act is narrow."
The accusations arrive amid an ongoing debate in Washington about limiting advanced chip exports to China. Anthropic argued the scale of the alleged extraction "requires access to advanced chips" and said such distillation attacks reinforce the rationale for export controls that would limit both direct training and the scale of illicit cloning. Security commentator Dmitri Alperovitch said the episode underlines those concerns: "It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact," he said, adding that implicated companies should be refused advanced chips.

Anthropic said it has shared its findings with relevant U.S. government entities and industry partners. An Anthropic official, identified only as Klein, suggested that publicly naming the labs could prompt "thoughtful government action" or at least engagement with the companies involved. The company also reiterated that its terms of service prohibit surreptitious harvesting for distillation and that it does not permit use of its technology in China.
OpenAI has made similar allegations against DeepSeek in the past, including a Feb. 12 memo to a congressional committee alleging systematic intellectual property theft via distillation. Attempts to reach DeepSeek, Moonshot and MiniMax for comment were not immediately successful.
The claims rest on Anthropic’s internal forensics; the company has not published a full technical annex for independent review. Critics on social media pointed to Anthropic’s controversial history of scraping internet text, including allegations about use of copyrighted books in model training, arguing the company should address its own practices even as it presses for stronger controls on competitors. Independent verification and responses from the accused firms or any government inquiry will be central to assessing the scope and implications of the accusations.

