Researchers uncover 21,000 exposed OpenClaw AI instances online
Researchers found more than 21,000 publicly reachable OpenClaw agents, risking leaked keys and full system access if misconfigured.

Internet scans have uncovered a vast number of publicly reachable instances of OpenClaw, an open-source agentic AI platform that can execute actions on users’ behalf. Censys identified 21,639 instances whose web interfaces matched HTML titles such as "Moltbot Control" and "clawdbot Control," a count current as of January 31, 2026. Even though many of those endpoints require authentication tokens, their scale and visibility create a systemic risk.
The discovery combined large-scale scanning with manual checks. Investigators used HTML-title fingerprints and queries for the platform's default port, 18789, to find exposed control panels. "Censys has identified more than 21,000 publicly exposed instances as of 31 January 2026," said Silas Cutler, a principal security researcher at Censys. Offensive researcher Jamieson O’Reilly of Dvuln reported that a simple Shodan search for "Clawdbot Control" returned hundreds of results within seconds, and that, of the instances he inspected manually, eight were completely open with no authentication, allowing full command and configuration access.
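The fingerprinting method the researchers describe can be sketched as a simple title match. The two title strings come from the report; the function name and the regex-based HTML parsing are illustrative assumptions, not the scanners' actual implementation:

```python
import re

# HTML <title> strings reported as fingerprints for exposed control panels
KNOWN_TITLES = {"Moltbot Control", "clawdbot Control"}

def looks_like_exposed_panel(html: str) -> bool:
    """Return True if the page's <title> matches a known fingerprint.

    A minimal sketch: services like Censys and Shodan index page titles
    at internet scale; this checks only a single fetched page body.
    """
    match = re.search(r"<title[^>]*>(.*?)</title>", html,
                      re.IGNORECASE | re.DOTALL)
    if not match:
        return False
    return match.group(1).strip() in KNOWN_TITLES
```

In practice a scanner would fetch the root page of each host on port 18789 and pass the response body to a check like this one.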
Security analysts documented a range of sensitive data left in plain view on some deployments. Researchers found plaintext API keys and credentials, including Anthropic API keys, Telegram bot tokens, Slack OAuth credentials, signing secrets and conversation histories. Cisco security researchers warned that "OpenClaw has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via prompt injection or unsecured endpoints." Those exposures raise the prospect that attackers could extract secrets, read local files and messages, and instruct agents to take actions on compromised hosts.
OpenClaw’s design expands the potential blast radius. The platform integrates with more than 50 services and messaging systems such as WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage and Microsoft Teams. JFrog researchers Natan Nehorai and Ofri Ouzan said the security mindset applied to many deployments is insufficient. "In security, we never assume perfection. We assume zero-trust, and we design controls to limit the blast radius. That mindset is missing from many OpenClaw deployments today," they wrote in a Feb. 2 technical analysis.
Attackers can exploit multiple vectors. Prompt-injection attacks are especially concerning because OpenClaw agents routinely ingest web content, emails and documents; the project documentation cautions that "the sender is not the only threat surface; the content itself can carry adversarial instructions." Other risks include token brute-force, unsecured endpoints, supply-chain threats from malicious skills or extensions, and remote access tools disguised as plugins. Security vendors flagged a malicious Visual Studio Code extension named "ClawdBot Agent" on Jan. 27 that behaved like a remote access Trojan.

Geographic and infrastructure patterns are uneven but notable. The largest visible concentrations of instances were in the United States, followed by China and Singapore, and about 30 percent of detected instances ran on Alibaba Cloud, though analysts cautioned that visibility bias and regional network architecture may skew those figures.
Developers and community maintainers responded quickly after the findings surfaced, rolling out additional security recommendations and configuration guidance. Analysts urged immediate steps: do not expose control endpoints to the public internet without network restrictions and strong authentication, rotate any compromised keys, audit configurations for plaintext secrets, vet third-party extensions, and adopt zero-trust controls to limit potential damage.
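One of the recommended steps, auditing configurations for plaintext secrets, can be sketched as a pattern scan. The regexes below are assumptions based on publicly documented credential formats (Anthropic keys begin with "sk-ant-"; Telegram bot tokens are a numeric ID, a colon, and a 35-character string), not a vetted detection ruleset:

```python
import re

# Illustrative patterns for two credential types the researchers found
# exposed; exact formats are assumptions from public documentation.
SECRET_PATTERNS = {
    "anthropic_api_key": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "telegram_bot_token": re.compile(r"\b\d{8,10}:[A-Za-z0-9_-]{35}\b"),
}

def find_plaintext_secrets(text: str) -> list[tuple[str, str]]:
    """Scan config text and return (kind, redacted match) pairs.

    Redacts all but the first eight characters so the audit report
    does not itself re-expose the secret.
    """
    findings = []
    for kind, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append((kind, m.group(0)[:8] + "…"))
    return findings
```

A real audit would also cover Slack OAuth credentials and signing secrets, and would walk the deployment's configuration directories rather than a single string.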
The episode underscores a broader tension in the rush to deploy agentic AI: rapid adoption can outpace secure deployment practices, creating an unmanaged attack surface that can expose sensitive data and systems unless operators apply rigorous controls.