For many years, state-sponsored hacking was defined by human expertise in finding security holes, writing malware and exploits, pulling off social engineering and phishing attacks, and much more.
Since the advent of LLM-powered AI assistants and tools, less skilled attackers have been able to carry out attacks and compromises that might otherwise have been out of their reach.
Case in point: HexagonalRodent. According to Expel’s research, the group makes heavy use of generative AI, with telemetry showing active use of Cursor (an AI-native code editor) and ChatGPT across their operations.
Who is HexagonalRodent?
HexagonalRodent is a North Korean state-sponsored APT group that, in Expel’s assessment, is a subgroup or operational offshoot of Famous Chollima, a group that specializes in infiltrating companies by posing as legitimate remote IT workers.
The group’s malware toolkit (BeaverTail, OtterCookie, and InvisibleFerret) is shared across several distinct clusters within the DPRK ecosystem, each with its own targeting priorities and operational style.
Some of these clusters conduct sophisticated intrusions into the networks of major crypto exchanges, but HexagonalRodent specializes in targeting individual Web3 developers.
Individual crypto investors and small blockchain projects often hold significant digital assets but lack enterprise-grade security infrastructure. Unlike a major cryptocurrency exchange, a solo developer with $400,000 in a software wallet is a soft target.
HexagonalRodent’s typical attack starts with social engineering designed to get targets to run malware: the group contacts Web3 developers with job offers, usually via LinkedIn.
The group also sets up elaborate fake company websites and fakes those companies’ LinkedIn presence, then lists job openings on Web3-focused career platforms.
After a target applies for a position, they are asked to complete a coding skills assessment. When they open the assessment’s project folder in VSCode, a malicious workspace task triggers the execution of the malware.
“Additionally, the skills assessments have backdoors in the actual code, which are designed to be executed when the code is run. This serves as a primary infection vector for targets who are not using VSCode, as well as a fallback in cases where the user opens the project in safe mode, or has VSCode tasks disabled,” Expel researcher Marcus Hutchins explained.
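The VSCode mechanism Hutchins refers to is the editor’s automatic tasks feature: a workspace task whose `runOn` option is set to `folderOpen` executes as soon as the folder is opened in a trusted workspace. A minimal sketch of what such a task definition could look like (the label and script name here are hypothetical, not samples from the campaign):

```jsonc
// .vscode/tasks.json — hypothetical sketch, not an actual HexagonalRodent sample
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Install dependencies",      // innocuous-looking label
      "type": "shell",
      "command": "node .config/setup.js",   // hypothetical loader script
      "runOptions": {
        "runOn": "folderOpen"               // runs automatically when the folder is opened
      }
    }
  ]
}
```

Because VSCode only honors `folderOpen` tasks in trusted workspaces with automatic tasks allowed, a trick like this needs the fallback paths Hutchins describes.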
The group’s use of AI-powered tools
For HexagonalRodent members, AI has lowered the barrier to entry and enabled them to perform operations that once required fluent language skills, sophisticated code modification, and careful persona management. These capabilities are now partially “outsourced” to commercial AI tools that were built for legitimate use.
The group uses:
- The AI-powered website design and development platform Anima to create fake company websites
- Cursor to develop new malware loaders, and
- ChatGPT to help them with password recovery and credential-security workflows, server and infrastructure security, developer troubleshooting, and crypto wallet recovery processes, and likely with the social engineering layer as well.
Expel notified both OpenAI and Cursor of the group’s activity. Cursor confirmed it had blocked accounts and IP addresses associated with the attacks, and OpenAI acknowledged that a small number of accounts sought help on topics with dual-use (i.e., positive and negative) potential, but said the interactions amounted to limited use rather than sustained malware development, and that safety systems redirected overtly malicious requests.
Still, Expel has found evidence of the group using two new tools that appear to have been “vibe-coded”.
“We also saw evidence of several of the threat actors prompting various US-owned AI models to audit their skills assessments’ code for malware. We believe this was likely part of an attempt to AI-proof their backdoors,” Hutchins noted.
“Previously, several of the threat actor’s campaigns had been burned as a result of their targets using AI to audit the skills assessment’s source code. Frontier AI models could often find the backdoors with ease, resulting in several targets publicly outing the threat actor’s personas.”
Why HexagonalRodent keeps succeeding
Expel’s extensive research has mapped the scale of HexagonalRodent’s operations over the last few months, and the numbers are striking.
“From victim IP addresses and system hostnames contained within the data, we are able to deduce that the threat actor’s campaigns exfiltrated a total of 26,584 cryptocurrency wallets from 2,726 infected developer’s systems,” Hutchins shared.
How much of the approximately $12 million worth of crypto assets held in these wallets was actually stolen is unknown.
The group’s stealthy operations often go unnoticed for a while, and because it doesn’t pursue lateral movement within corporate networks, it leaves a smaller forensic footprint.
Their vibe-coded malware also often flies under the radar of endpoint detection and response (EDR) solutions. (It also helps the attackers that individual developers are not always running EDR or other security tools.)
“The group makes use of the commercial JavaScript obfuscator obfuscator.io, which is used by legitimate developers to protect their source code from reverse engineering and/or theft. This makes it extremely difficult to write antimalware signatures for, since the obfuscated JS malware just looks like any other JS obfuscated code,” Hutchins pointed out.
Finally, the group writes its malware for NodeJS and Python, two runtimes widely used by developers but rarely by malware creators, because they aren’t installed on a typical computer.
But HexagonalRodent targets software developers, and they almost always have both installed. Better still for the attackers, seeing NodeJS code running on a NodeJS developer’s machine looks completely normal. The malware hides in plain sight by blending in with the victim’s everyday work.
