OpenClaw Security Risks: Installing the AI Agent Requires Extreme Caution
AI News

OpenClaw Security Risks: Installing the AI Agent Requires Extreme Caution

5 min
2/9/2026
OpenClawAI SecurityCybersecurityArtificial Intelligence

OpenClaw's Rapid Rise Meets a Security Reckoning

OpenClaw, the open-source AI assistant that has taken the developer world by storm, is facing intense scrutiny over fundamental security shortcomings. Originally launched in November 2025 and rebranded twice, the project surpassed 150,000 GitHub stars by late January 2026, fueled by its promise of an agent that "actually does things" like manage calendars and automate tasks. However, security analyses from multiple firms reveal a tool where security was deprioritized for usability, creating what Gartner has bluntly called "an unacceptable cybersecurity liability."

The core appeal—and danger—of OpenClaw lies in its architecture. It runs locally, interacts via popular messaging apps, and can be granted extensive permissions to read/write files, execute scripts, and run shell commands. A community-driven "skills" marketplace, ClawHub, allows for endless extensibility. Yet, this very openness is the source of its greatest vulnerabilities.

Architectural Insecurities and the Illusion of Safety

"What makes OpenClaw stand out is the state in which it was released—security considerations were largely deprioritized in favor of usability and rapid adoption," notes security expert Aviad Cohen. The project's own documentation is criticized for not adequately emphasizing the risks of deploying a "highly privileged, autonomous agent." Marijus Briedis, CTO of NordVPN, states the security model "assumes a level of user expertise that most people do not possess."

For the average user deploying OpenClaw on a home server or VPS, the default settings are insufficiently secure. The tool requires extensive data, account, and network access permissions. ByteDance's Volcano Engine, while supporting deployment, explicitly warned developers to use a dedicated environment, avoid sensitive information, and strictly review permissions for cloud servers and API keys.

The Dual Threat: Malicious Skills and Prompt Injection

The community skills ecosystem has become a major attack vector. OpenSourceMalware tracked 28 malicious skills published in a single weekend in late January, followed by 386 malicious add-ons days later. These often masquerade as cryptocurrency trading tools designed to steal API keys, wallet private keys, and browser passwords.

Skills are often simple markdown files, which can contain hidden malicious instructions for both the user and the AI agent itself. One popular "Twitter" skill was found to contain a link that triggered the download of infostealing malware. While creator Peter Steinberger has implemented measures like requiring a week-old GitHub account to publish skills, the platform remains permeable.

Perhaps more insidious is the threat of prompt injection. As CrowdStrike's analysis highlights, this technique—where malicious instructions hidden in emails or documents hijack the agent—doesn't exploit software flaws but manipulates the AI's core function. The now-infamous Moltbook data leak, which exposed 1.5 million API tokens, illustrated these risks in practice.

Widespread Exposure and Vulnerable Deployments

The scale of the problem is vast. SecurityScorecard discovered over 40,000 exposed OpenClaw instances on the public internet. Alarmingly, 63% of these deployments were vulnerable, with 12,812 instances exploitable via remote code execution (RCE) attacks, granting total host control to threat actors.

Furthermore, they correlated 549 exposed instances with prior breach activity and 1,493 with known vulnerabilities. Three high-severity CVEs with public exploit code have already been identified. The most impacted industries are information services, technology, manufacturing, and telecommunications, with most exposures located in China, the US, and Singapore.

This creates a dangerous concentration of risk. "The more centralized the access, the more damage a single compromise can cause," SecurityScorecard warned. Gartner's recommendation was unequivocal: enterprises should "block OpenClaw downloads and traffic immediately," citing shadow deployments as creating "single points of failure."

Mitigation Is Possible, But Removal Is Tricky

For organizations or users determined to proceed, security firms advise a strict, zero-trust approach. This includes aggressively limiting permissions, adopting a "never trust, always verify" mindset for all agents and integrations, and treating every agent as a privileged identity capable of causing damage. Regular permission reviews and avoiding long-lived credentials are essential.

Even removing OpenClaw requires caution. According to OX Security, the program can leave behind user credentials and configuration files if not uninstalled meticulously, creating persistent security risks.

OpenClaw has attempted to address concerns, notably by integrating VirusTotal malware scanning for its skills marketplace. However, the team was candid about its limitations: "Let’s be clear: this is not a silver bullet. A skill that uses natural language to instruct an agent to do something malicious won’t trigger a virus signature."

The Enterprise Adoption Paradox

Despite the warnings, adoption is accelerating, particularly in enterprise environments. A stunning Gartner analysis from January 30th revealed that 53% of Noma's enterprise customers granted OpenClaw privileged access over a single weekend. This rapid, often unsanctioned deployment echoes classic shadow IT patterns but with far greater consequences due to the agent's pervasive access.

Major Chinese tech giants like Alibaba Cloud and ByteDance are rolling out official support and deployment guides, acknowledging the tool's popularity while including safety caveats. This creates a tension between market demand for powerful AI automation and the sobering reality of its associated risks.

Conclusion: A Powerful Tool with Immature Safeguards

OpenClaw represents a watershed moment in accessible, agentic AI, demonstrating immense productivity potential. However, its current state serves as a case study in the dangers of prioritizing growth over security foundations. The combination of insecure defaults, a vulnerable extensibility model, widespread exposed deployments, and the novel threat of prompt injection creates a perfect storm of risk.

For now, it remains a tool strictly for experts who understand network isolation, permission management, and secure tunneling. For the average user or enterprise, the consensus from cybersecurity professionals is clear: the convenience offered is currently eclipsed by the concentration of risk. Until a robust, built-in security framework matures, OpenClaw's gregarious capabilities come with equally gregarious insecurities.