Moltbot open-source AI agent exposes user API keys and chat logs
8 days ago • ai-security
Researchers have disclosed critical vulnerabilities in Clawdbot, the open-source AI agent renamed Moltbot on January 25 after an Anthropic trademark dispute over its similarity to Claude.[1][2]
Hundreds of internet-exposed control panels are discoverable via Shodan with the query 'Clawdbot Control.' Many of these panels expose API keys, OAuth secrets, bot tokens, and full chat histories without authentication, and some permit command execution with elevated privileges.[4][5]
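For defenders checking their own footprint, a minimal sketch of that discovery step using the official shodan Python library follows; the environment variable, result limit, and printed fields are assumptions for illustration, while the query string is the fingerprint reported by researchers.

```python
# Minimal sketch: enumerate hosts matching the reported 'Clawdbot Control'
# fingerprint with the official shodan Python client (pip install shodan).
# SHODAN_API_KEY, the 20-result sample, and the printed fields are assumptions.
import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

try:
    results = api.search("Clawdbot Control")
    print(f"Matches reported by Shodan: {results['total']}")
    for match in results["matches"][:20]:  # sample the first 20 hits
        print(match["ip_str"], match.get("port"), match.get("org", "unknown org"))
except shodan.APIError as exc:
    print(f"Shodan query failed: {exc}")
```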
Misconfigured reverse proxies cause the gateway to treat remote connections as if they originated from localhost. That behavior lets attackers steal credentials, take over accounts, and exfiltrate data through integrations such as Telegram and Slack.[4]
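The failure mode is easy to reproduce in miniature: an application that trusts the TCP peer address sees every proxied request as coming from 127.0.0.1, because the proxy itself terminates the connection locally. The sketch below, built on Python's standard http.server with hypothetical handler names, contrasts that flawed check with a safer variant that refuses to treat forwarded traffic as local.

```python
# Sketch of the reverse-proxy pitfall: behind nginx or Caddy, the TCP peer is
# the proxy (often 127.0.0.1), so a naive "localhost only" gate passes for
# every remote attacker the proxy forwards. Names here are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PanelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        peer_ip = self.client_address[0]                 # the proxy, not the real client
        forwarded = self.headers.get("X-Forwarded-For")  # set by the proxy for remote clients

        # Flawed check: treats any loopback TCP peer as a trusted local operator.
        naive_local = peer_ip == "127.0.0.1"

        # Safer check: only a direct loopback connection that was never
        # forwarded by a proxy counts as local.
        really_local = naive_local and forwarded is None

        self.send_response(200 if really_local else 403)
        self.end_headers()
        self.wfile.write(
            f"naive_local={naive_local} really_local={really_local}\n".encode()
        )

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), PanelHandler).serve_forever()
```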
The locally running agent has full system access for file I/O and browser control, and it is vulnerable to prompt injection. Researcher Matvey Kukuy extracted a private key in five minutes by sending a malicious email that the bot processed.[5] The developers' FAQ acknowledges the risks of untrusted input and recommends IP whitelisting and sandboxing. The v2026.1.24 release focused on UI and plugins and did not list explicit fixes for these issues.[1]
Why It Matters
- Scan public-facing infrastructure with Shodan for the 'Clawdbot Control' fingerprint and take exposed panels offline or block access.
- Enforce IP whitelisting and network-level restrictions on agent gateways to prevent unauthorized access to control panels.
- Sandbox agents that hold system privileges to limit the blast radius from prompt injection via email or messaging integrations.
- Rotate and audit API keys, OAuth secrets, and bot tokens found on exposed instances to mitigate credential theft and potential RCE.
- Prioritize agent hardening playbooks and monitoring as local AI tools with persistent access become more common in production.
Trust & Verification
Sources (5)
- moltbot (GitHub Releases) • Official • Jan 25, 2026
- Forbes • Tier-1 • Jan 27, 2026
- Barron's • Tier-1 • Jan 27, 2026
- Bitdefender (Hot for Security) • Other • Jan 27, 2026
- Cointelegraph • Other • Jan 27, 2026
Fact Checks (4)
- Prompt injection enables private key extraction in 5 minutes (VERIFIED)