Imagine your personal AI assistant, the one that knows your habits, preferences, and even your secrets, falling into the wrong hands. That's the chilling reality cybersecurity researchers have uncovered with a new infostealer targeting OpenClaw, the popular open-source AI agent platform. This isn't just about stolen passwords anymore; it's about hijacking the very essence of your digital self.
Researchers at Hudson Rock have identified a case where an infostealer, likely a variant of the notorious Vidar malware, successfully exfiltrated sensitive OpenClaw configuration files and gateway tokens from a victim's system. This marks a significant evolution in infostealer tactics, shifting from targeting browser credentials to stealing the 'soul' of personal AI agents.
Here's the twist: the theft wasn't achieved through a specialized OpenClaw module within the malware. Instead, it relied on a generic file-grabbing routine, meaning even malware with no knowledge of OpenClaw can sweep up these seemingly innocuous but critical files. The stolen files included:
- openclaw.json: This file holds the gateway token, essentially the key to remotely accessing the victim's OpenClaw instance, along with their email address and workspace path.
- device.json: Containing cryptographic keys, this file enables secure pairing and signing operations within the OpenClaw ecosystem, potentially allowing attackers to impersonate the victim's AI agent.
- soul.md: This file is the heart of the AI agent, outlining its core principles, behavior, and ethical boundaries. Imagine someone not only stealing your AI assistant but also manipulating its very essence.
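To see why these files are such a rich target, consider what a configuration file like openclaw.json might contain. The structure and field names below are a hypothetical illustration, not the actual OpenClaw schema:

```json
{
  "gateway": {
    "token": "gw_live_9f8e7d6c...",
    "port": 18789
  },
  "user": {
    "email": "victim@example.com",
    "workspace": "/home/victim/openclaw/workspace"
  }
}
```

Anyone who copies the token value can present it to the gateway exactly as the legitimate owner would; a bearer credential like this carries no proof of who is holding it.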
Crucially, the stolen gateway token can grant attackers remote access to the victim's OpenClaw instance if the gateway port is exposed to the internet. With that access, they could control the AI agent, send messages on the victim's behalf, read sensitive data, or repurpose the agent entirely.
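If a gateway token does leak, rotating it quickly shrinks the attacker's window. A minimal sketch of local rotation, assuming a hypothetical openclaw.json layout with a top-level "gateway_token" field (the real schema may differ):

```python
import json
import secrets
from pathlib import Path

def rotate_gateway_token(config_path: Path) -> str:
    """Replace the gateway token in a hypothetical openclaw.json
    with a freshly generated random value and return the new token."""
    config = json.loads(config_path.read_text())
    # 32 random bytes, URL-safe encoded: infeasible to guess or reuse.
    new_token = secrets.token_urlsafe(32)
    config["gateway_token"] = new_token  # hypothetical field name
    config_path.write_text(json.dumps(config, indent=2))
    # Owner-only permissions: doesn't stop same-user malware,
    # but blocks other local accounts from reading the file.
    config_path.chmod(0o600)
    return new_token
```

Rotating the file locally is only half the job: the old token must also be invalidated on the gateway side, or the stolen copy remains valid.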
"While the malware was likely searching for standard 'secrets,' it inadvertently hit the jackpot by capturing the entire operational context of the user's AI assistant," Hudson Rock explains. As AI agents like OpenClaw become increasingly integrated into our lives, we can expect dedicated infostealer modules specifically designed to target these files, just as they do for Chrome or Telegram today.
This discovery comes amidst growing security concerns surrounding OpenClaw. The platform's maintainers have partnered with VirusTotal to scan for malicious skills uploaded to ClawHub, establish a threat model, and implement auditing capabilities for potential misconfigurations. However, the recent discovery of a malicious ClawHub skills campaign bypassing VirusTotal scanning by hosting malware on fake OpenClaw websites demonstrates the cat-and-mouse game between attackers and defenders.
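Hash-based scanning of the kind the ClawHub/VirusTotal partnership implies can be sketched in a few lines using VirusTotal's public v3 files endpoint. Note that this is precisely the defense the fake-website campaign sidesteps: malware hosted off-platform is never uploaded for scanning in the first place.

```python
import hashlib
import json
import urllib.request

VT_URL = "https://www.virustotal.com/api/v3/files/{}"

def sha256_of(path: str) -> str:
    """Hash a file the way scanners index it: SHA-256 of its bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def vt_lookup(file_hash: str, api_key: str) -> dict:
    """Fetch any existing VirusTotal report for this hash.
    Requires an API key; an HTTP 404 means the file has never
    been submitted, which is itself a weak signal either way."""
    req = urllib.request.Request(
        VT_URL.format(file_hash), headers={"x-apikey": api_key}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A clean lookup only proves the exact bytes were seen and scored before; trivially repacked malware produces a brand-new hash with no report.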
Another alarming issue highlighted by OX Security involves Moltbook, a Reddit-like forum for AI agents. Once an AI agent account is created on Moltbook, it cannot be deleted, raising serious privacy concerns for users who wish to remove their data.
Furthermore, SecurityScorecard's STRIKE Threat Intelligence team has identified hundreds of thousands of exposed OpenClaw instances, potentially leaving users vulnerable to remote code execution (RCE) attacks. RCE vulnerabilities allow attackers to execute arbitrary code on the underlying system, turning a single exposed service into a gateway for broader system compromise, especially when OpenClaw has access to email, APIs, or cloud services.
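A quick self-audit against this exposure class is straightforward on Linux: enumerate TCP sockets listening on all interfaces and check whether your agent's port is among them. A minimal sketch (IPv4 only, using the Linux-specific /proc interface):

```python
def wildcard_listeners() -> set[int]:
    """Return ports with IPv4 TCP sockets listening on all interfaces
    (0.0.0.0), parsed from /proc/net/tcp. State 0A means LISTEN."""
    ports = set()
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            addr_hex, port_hex = local_addr.split(":")
            if state == "0A" and addr_hex == "00000000":
                ports.add(int(port_hex, 16))
    return ports
```

A wildcard bind doesn't by itself prove internet reachability (a firewall or NAT may still block it), but a service bound only to 127.0.0.1 cannot appear in scans like STRIKE's at all, so loopback-only binding is the safer default.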
OpenClaw's rapid rise in popularity, with over 200,000 GitHub stars and OpenAI's recent hiring of its founder, underscores the platform's potential. However, that same popularity makes it a prime target for malicious actors.
As AI becomes increasingly intertwined with our lives, the stakes of data breaches and security vulnerabilities grow exponentially. The OpenClaw case serves as a stark reminder that protecting our digital selves requires constant vigilance and robust security measures.
What do you think? Are we doing enough to secure our AI-powered future? Share your thoughts in the comments below.