
Attackers clone OpenClaw agents in first AI agent identity theft

Security researchers reported attackers cloned OpenClaw instances and skills, enabling automated impersonation and unauthorized actions that threaten downstream systems.

Dr. Elena Rodriguez · 3 min read
AI-generated illustration

On February 24, 2026, security researchers and incident responders disclosed what they described as the first documented campaign of "AI agent identity theft," in which adversaries cloned and weaponized OpenClaw instances and skills to impersonate legitimate personal AI assistants and execute unauthorized actions.

OpenClaw, an open-source, action-capable personal AI assistant used by hobbyists and integrators, was the campaign's target, researchers said. The attack combined counterfeit agent manifests with repackaged or trojaned skills to create convincing replicas of genuine agents. Those replicas then invoked downstream APIs and automation endpoints under the guise of legitimate user agents, responders reported.

Technically, the operation turned two widely used features of agent ecosystems into an exploit pathway: the distribution of third-party skills and the reliance on agent identity and provenance to authorize actions. By altering skill packages and cloning agent metadata, attackers avoided typical user prompts and leveraged preexisting trust relationships between agents and service endpoints. Incident responders who examined the campaign noted patterns consistent with coordinated supply-chain manipulation and automated credential harvesting.
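The failure mode described above can be illustrated with a minimal sketch. The function and field names below are hypothetical, not OpenClaw's actual API; the point is only that an authorization gate keyed on self-reported agent metadata accepts a clone just as readily as the genuine agent.

```python
# Hypothetical sketch: an authorization gate that trusts self-reported
# agent metadata -- the pattern the campaign reportedly abused.

TRUSTED_AGENT_IDS = {"assistant-7f3a"}  # allow-list keyed on metadata alone

def authorize_by_metadata(manifest: dict) -> bool:
    """Naive gate: trusts whatever agent_id the manifest claims."""
    return manifest.get("agent_id") in TRUSTED_AGENT_IDS

genuine = {"agent_id": "assistant-7f3a", "publisher": "alice"}
counterfeit = {"agent_id": "assistant-7f3a", "publisher": "mallory"}  # cloned metadata

print(authorize_by_metadata(genuine))      # True
print(authorize_by_metadata(counterfeit))  # True -- the clone passes too
```

Because nothing in the check binds the claimed identity to a key or a verified build, copying the metadata is sufficient to inherit the trust relationship.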

The immediate impact is operational. Organizations and individuals who trusted agent-origin metadata to gate access faced the risk that actions attributed to a recognized agent were in fact driven by a malicious counterfeit. That includes automated file transfers, billing API calls, account resets, and workflow triggers in connected productivity and cloud platforms. Researchers warned that the most acute danger is not noisy break-ins but stealthy abuse of automation pipelines that run with elevated privileges by design.

The campaign also raises severe supply-chain concerns. Open-source skill repositories and community marketplaces, where thousands of small modules and actions are shared, can act as force multipliers: a trojaned skill pushed into one distribution channel can be pulled and run as a trusted component across numerous agents. Because many deployments accept skills without strong cryptographic attestation or provenance checks, attackers can scale impersonation quickly.
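One basic provenance check of the kind researchers allude to is digest pinning: refusing to load a skill package whose content hash differs from a known-good value. The sketch below uses Python's standard-library hashlib; the skill name and pinned digest are illustrative (the digest shown is the well-known SHA-256 of empty input), not values from any real repository.

```python
# Hedged sketch of provenance checking: pin each skill package to a
# known-good SHA-256 digest before loading it. Names are illustrative.
import hashlib

PINNED_DIGESTS = {
    "weather-skill": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_skill(name: str, package: bytes) -> bool:
    """Reject any skill whose content hash differs from the pinned digest."""
    expected = PINNED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown skills are rejected by default
    return hashlib.sha256(package).hexdigest() == expected

# The empty package matches the pinned digest above; any tampered
# bytes change the hash and fail the check.
print(verify_skill("weather-skill", b""))          # True
print(verify_skill("weather-skill", b"trojaned"))  # False
```

Real deployments would pin signatures from a trusted publisher key rather than bare hashes, but even this minimal check blocks the "swap the package, keep the name" path.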

Security practitioners assessing exposure are being urged to treat agent identities as a new class of credentials. Recommended defensive steps include rotating service keys and tokens used by agents, enforcing signed manifests and builds for skills, restricting the scopes and privileges that action-capable agents receive, and instituting behavioral monitoring that flags unusual agent-driven sequences. Researchers also called for immediate review of any automation that executes changes across accounts or billing systems without multi-factor human approval.
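The behavioral-monitoring recommendation can be sketched in a few lines: compare each agent action against that agent's historical baseline and alert on anything outside it. All agent and action names below are invented for illustration.

```python
# Illustrative behavioral monitor: flag an agent that suddenly performs
# actions outside its historical baseline. Names are hypothetical.

class AgentMonitor:
    def __init__(self, baseline: set):
        self.baseline = baseline      # actions this agent normally performs
        self.alerts = []

    def observe(self, agent_id: str, action: str) -> None:
        """Record an alert for any action not seen in the baseline."""
        if action not in self.baseline:
            self.alerts.append(f"{agent_id}: unexpected action '{action}'")

monitor = AgentMonitor(baseline={"read_calendar", "send_summary"})
for action in ["read_calendar", "send_summary", "billing.charge", "account.reset"]:
    monitor.observe("assistant-7f3a", action)

print(monitor.alerts)  # flags billing.charge and account.reset only
```

A production system would score sequences rather than single actions, but even a baseline allow-list of action types catches the billing and account-reset abuse patterns the article highlights.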

Beyond technical remedies, the incident exposed gaps in governance for action-capable AI. The ability to program agents with real-world effects means provenance, auditing, and liability must be central to design. Open-source communities, platform providers, and enterprises face pressure to adopt cryptographic attestation standards, curated skill marketplaces with vetting, and clear incident response pathways for agent abuse.

Researchers framed the campaign as a wake-up call: the identity of an AI agent can now be stolen and used to do real-world harm. As agent ecosystems proliferate into personal devices, smart homes, and enterprise automations, defenders must close the gap between machine identity and trustworthy action before impersonation becomes routine.
