The OpenClaw autonomous artificial intelligence agent project has teamed up with Google’s VirusTotal malware scanning service, following the discovery of malicious code in the bot framework’s ClawHub skills marketplace.

One such malicious skill was discovered recently by Jason Meller at passphrase manager vendor 1Password, who found that a very popular skill for OpenClaw called “Twitter” had a required dependency listed.
That dependency turned out to be a multistep downloader for infostealer malware for Apple’s macOS operating system.
Meller confirmed this by scanning the binary file the skill downloaded at VirusTotal, which runs multiple anti-virus utilities to check for potential malware.
The infostealer malware is able to exfiltrate highly sensitive user data such credentials, browser sessions and cookies, application programming interface keys and more, which in turn can be used to take over accounts.
With the help of an OpenClaw bot, security vendor Koi audited the skills files on ClawHub, and found that out of 2857, 341 were malicious.
The vast majority – 335 – of the malicious skills files appear to be from a single supply chain attack attempt, Koi’s audit suggested.
Skills files for OpenClaw are written in Markdown format and extend the AI bot’s abilities.
OpenClaw said this powerful feature could be used for the bot to control smart home devices, managing finances, handling emails, and automating workflows.
At the same time, the OpenClaw project warned that a malicious skill could run unauthorised commands, exfiltrate information, send messages on the user’s behalf, and download and run external payloads.
Now, OpenClaw will check skills published to its ClawHub marketplace against VirusTotal’s database.
The team-up also gives OpenClaw access to the VirusTotal CodeInsight, which is a large language model (LLM) powered tool to analyse code for malicious traits, based on Google’s Gemini AI.
Nevertheless, the OpenClaw project warned that the VirusTotal scanning won’t catch all malware.
“A skill that uses natural language to instruct an agent to do something malicious won’t trigger a virus signature,” the project said.
Neither will a carefully crafted prompt injection payload show up in a threat database, it said.
Developed by Austrian engineer Peter Steinberger, the open source OpenClaw bot framework has shot up in popularity over the last few months.
OpenClaw bots can connect LLMs such as Anthropic’s Claude.ai (hence the claw and lobster puns for the project) to messaging platforms such as Signal, Telegram, Slack, Apple iMessage and Microsoft Teams.
The agents run with persistent memory, and can access user data, which brings serious security risks such as leakage of access keys and sensitive information such as private messages.
Security researchers strongly advise not connecting OpenClaw bots to personal or business data and instead suggest trialling the bot segmented away from sensitive information and systems, and to avoid exposing it to the internet.
Nevertheless, internet infrastructure scanning and mapping company Censys recently found that tens of thousands of OpenClaw instances were misconfigured and accessible over the internet, putting those running the bot at risk.
For its part, OpenClaw intends to sharpen its security posture, and bring in a broader security initiative.
This includes a comprehensive threat model, a security roadmap, details of the project’s security audit and a formal security reporting process.
The founder of Australia’s DVULN information security vendor, Jamieson O’Reilly, has been brought on to the OpenClaw Project as a lead adviser for the security program.
