Anthropic Claude Code Leak Spawns Malicious Forks With Credential-Stealing Malware
A source-map error in Anthropic's npm package exposed 500,000 lines of Claude Code source, which attackers quickly weaponized into credential-stealing forks targeting developer environments.

A single packaging mistake turned one of AI development's most popular coding tools into a supply-chain attack vector. Anthropic's @anthropic-ai/claude-code npm package contained a source-map file that allowed security researchers to reconstruct more than 500,000 lines of unobfuscated TypeScript, the full internal codebase of Claude Code, triggering a wave of clones, mirrors, and forks across GitHub and other code repositories.
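Packaging mistakes of this kind are usually caught by allowlisting shipped files and auditing the tarball before release. A minimal sketch using standard npm tooling (these are generic hygiene commands, not Anthropic's actual release process):

```shell
# List exactly what the published tarball would contain, without publishing:
npm pack --dry-run

# Flag any source maps that slipped into the artifact:
npm pack --dry-run 2>&1 | grep '\.map'

# In package.json, a "files" allowlist keeps stray build artifacts out of the
# published package, e.g.:
#   "files": ["dist/**/*.js"]
```

A `grep` hit on `.map` at this stage would have surfaced the leaked source map before it ever reached the registry.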
The disclosure, which Anthropic acknowledged as human error, set off a second and more dangerous chain of events. Within hours of the initial exposure in late March, third parties began modifying mirrored copies of the repository. Security teams found that several of those forks, some of which accumulated significant stars and secondary forks from the open-source community before removal, contained install scripts and small binaries designed to exfiltrate SSH keys, saved passwords, and browser cookies. Persistence mechanisms targeting Unix-like hosts were also observed.
Threat intelligence analysts noted the malware was deliberately engineered to evade detection. Attackers used legitimate system calls, small compressed payloads, and HTTPS exfiltration to blend malicious traffic with routine developer network activity. Both file-based and fileless techniques were documented across different poisoned forks.
Anthropic and platform hosts moved to remove the offending packages and repositories, and DMCA takedown notices were issued. Mirrored copies had already spread, however, including into private enterprise environments where remediation is significantly harder to verify.
Beyond the immediate malware risk, reverse engineers reported that the exposed TypeScript revealed sensitive architectural details of Claude Code itself. Among the disclosed internals: a "KAIROS" persistent agent designed for background tasks, a "proactive" or "dream" mode enabling autonomous operation, a multi-agent orchestration layer capable of spawning sub-agents for complex workflows, and an "Undercover Mode" built to allow the agent to commit to open-source repositories without surfacing internal strings. Security researchers say knowledge of these features opens new attack surfaces; a persistent agent like KAIROS, once understood by adversaries, could be manipulated into executing malicious operations and creates new opportunities for model fingerprinting and distillation attacks.
The incident landed at a particularly sensitive moment for enterprise AI adoption. Many organizations have been integrating AI coding agents directly into CI/CD pipelines, and the episode illustrated how AI tooling now travels through the same software supply chains as traditional packages. A single npm packaging error, in that context, can simultaneously expose proprietary intellectual property and seed commodity malware into developer environments worldwide.
Cybersecurity vendors issued immediate guidance: do not build or run code from unverified forks, scan development environments against published indicators of compromise, and enforce supply-chain controls such as pinned dependencies, reproducible builds, and blocked execution of unaudited install scripts. Several threat intelligence groups published command-and-control domains and IP addresses tied to the malicious installers.
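Two of those controls map directly onto built-in npm features. A minimal sketch of blocking install scripts and pinning to the committed lockfile:

```shell
# Refuse to run lifecycle scripts (preinstall/postinstall) from any dependency:
npm config set ignore-scripts true

# Or enforce it per-invocation instead of globally:
npm install --ignore-scripts

# Install exactly what the committed lockfile specifies, failing on any drift
# between package.json and package-lock.json:
npm ci
```

Blocking lifecycle scripts would have neutralized the install-time exfiltration observed in the poisoned forks, though some legitimate packages that rely on postinstall build steps then need explicit handling.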
For any organization that ran Claude Code or pulled from public forks during the exposure window, security teams are urging immediate credential revocation, SSH key rotation, full scans of developer hosts and CI runners, and rebuilds exclusively from trusted upstream packages after independent vetting. The event has also intensified calls for mandatory npm release signing, stricter package-registry hygiene, and default protections against source-map leaks in modern bundlers. Those debates are accelerating as regulators and enterprise customers weigh whether to demand stricter contractual security guarantees from AI platform providers.
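The key-rotation step, sketched with standard OpenSSH tooling (the filename and comment are illustrative, not part of any published playbook):

```shell
# Generate a fresh key pair to replace any potentially exfiltrated key:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_rotated -C "post-incident rotation"

# Then remove the old public key from every authorized_keys file and code
# forge, and revoke API tokens and saved credentials that lived on the
# affected host; rotating the key alone does not invalidate stolen tokens.
```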