Late last week, AI company Anthropic reported that a Chinese-backed threat group had used its Claude model in a sophisticated cyberespionage campaign, attempting to break into the systems of about 30 global companies and steal information. The unnamed group succeeded in a few of the attempts, but what stood out was the extent to which the bad actors used the agentic AI capabilities in Claude to set up and launch the attacks. According to Anthropic researchers, the attackers used AI agents to perform 80% to 90% of the campaign; human interaction entered the operation at only four to six critical decision points for each hacking attempt.

The San Francisco-based company suggested it was the first reported AI-orchestrated cyberespionage campaign, and an indication of what’s to come as bad actors become more proficient in their use of AI.

“The barriers to performing sophisticated cyberattacks have dropped substantially – and we predict that they’ll continue to do so,” the researchers wrote. “With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator.” At the same time, “less experienced and resourced groups can now potentially perform large-scale attacks of this nature.”
Dovetailing with Google’s Research
Anthropic’s experience echoes the findings from Google’s Cybersecurity Forecast 2026, released the week before, which predicts that next year, “threat actor use of AI is expected to transition decisively from the exception to the norm, noticeably transforming the cyber threat landscape. … We anticipate threat actors will increasingly adopt agentic systems to streamline and scale attacks by automating steps across the attack lifecycle.”

In an accompanying blog post, the Google Threat Intelligence Group (GTIG) wrote that it has identified a “shift” over this year in how threat groups are using the fast-evolving technology.

“Adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains; they are deploying novel AI-enabled malware in active operations,” they wrote. “This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution.”
New Malware Families
Included in what GTIG found were malware families – like PROMPTFLUX (a dropper), PROMPTSTEAL (a data miner), and PROMPTLOCK (ransomware) – that use large language models during execution to dynamically create malicious scripts, obfuscate their code to evade detection, and generate malicious functions on demand rather than hard-coding them into the malware. There’s also FRUITSHELL, a reverse shell, and QUIETVAULT, an info-stealer.

“While still nascent, this represents a significant step toward more autonomous and adaptive malware,” the researchers wrote. “Although some recent implementations of novel AI techniques are experimental, they provide an early indicator of how threats are evolving and how they can potentially integrate AI capabilities into future intrusion activity.”
Underground Marketplaces for Illicit AI Models
GTIG researchers also found a growing market in underground forums for AI tools purpose-built for malicious campaigns, supported by advertisements and promotions that mirror traditional marketing for their legitimate, conventional counterparts.

“Our assessment of advertisements for illicit AI tools and services in underground forums highlights some common underlying characteristics of these forum posts, including promotion of multi-use tools, a focus on phishing activity, emphasis on customer support, and enthusiasm for uncensored outputs,” Toni Gidwani, security engineering manager for GTIG, told MSSP Alert. “Notably, the underground AI tools observed in these forums support various facets of the attack lifecycle, including phishing operations, malware development, vulnerability exploitation, research and reconnaissance, technical support, and image generation.”

That said, one of the earliest malicious uses of generative AI after OpenAI released ChatGPT three years ago was polishing the language and spelling of phishing messages to make them more convincing to victims, and that continues.

“Of note, almost every notable tool advertised in underground forums mentioned their ability to support phishing campaigns,” the GTIG researchers wrote.
Still Using Commercial Models
There were also many conversations in the underground forums comparing illicit AI models with commercial ones in such areas as ease of use, performance, accessibility, and price.

“These conversations highlighted underground users’ continued reliance on legitimate AI services to support their malicious activity, likely due to the ubiquity and reputation of these services in contrast to their illicit counterparts,” Gidwani said.

Google researchers in the report noted other ways adversaries will use or compromise AI, including increasingly turning to prompt injection attacks that manipulate AI models into bypassing their security guardrails and obeying bad actors’ hidden commands. Some threat groups – Google pointed to the ShinyHunters extortion group as an example – will accelerate their use of AI in social engineering campaigns.

AI agents also will play an increasingly prominent role, both for bad actors – as Anthropic wrote about – and for cybersecurity analysts within corporate security teams, MSSPs, and MSPs.

“We expect to move past the model of analysts drowning in alerts, and into one where they direct AI agents,” the report’s authors wrote. “This comes as the ‘Agentic SOC,’ where frontline intelligence effectively becomes the brain for these new AI partners. … The analyst’s job shifts from manual data correlation to strategic validation, letting them approve a SOAR containment action in minutes, not hours.”
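The “Agentic SOC” the report describes is essentially a human-in-the-loop pattern: an AI agent does the alert correlation and drafts a response, and the analyst’s job becomes approving or rejecting that recommendation. The short Python sketch below illustrates that pattern only; it is a hypothetical, minimal example rather than Google’s or any vendor’s implementation, and every name in it (Alert, triage_alert, execute_containment, the simulated SOAR call) is an assumption made for illustration.

# Hypothetical sketch of a human-in-the-loop "agentic SOC" workflow.
# The agent triages an alert and drafts a containment action, but nothing
# runs until an analyst approves it. All names here are illustrative,
# not a real product API.

from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    host: str
    summary: str

@dataclass
class ContainmentProposal:
    alert_id: str
    action: str       # e.g. "isolate_host:<hostname>"
    rationale: str    # the agent's explanation for the analyst

def triage_alert(alert: Alert) -> ContainmentProposal:
    """Stand-in for an LLM agent that correlates the alert with threat
    intel and drafts a recommended SOAR action for analyst review."""
    return ContainmentProposal(
        alert_id=alert.alert_id,
        action=f"isolate_host:{alert.host}",
        rationale=f"Credential-theft behavior observed on {alert.host}; "
                  "recommend network isolation pending investigation.",
    )

def analyst_approves(proposal: ContainmentProposal) -> bool:
    """The human decision point: strategic validation, not manual correlation."""
    print(f"[{proposal.alert_id}] Proposed action: {proposal.action}")
    print(f"Rationale: {proposal.rationale}")
    return input("Approve containment? (y/n) ").strip().lower() == "y"

def execute_containment(proposal: ContainmentProposal) -> None:
    """Placeholder for the SOAR playbook call that would isolate the host."""
    print(f"SOAR playbook triggered: {proposal.action}")

if __name__ == "__main__":
    alert = Alert("A-1042", "fin-ws-07", "Suspicious LSASS memory access")
    proposal = triage_alert(alert)
    if analyst_approves(proposal):
        execute_containment(proposal)
    else:
        print("Containment rejected; alert returned to queue.")

In practice the triage step would call a model and the execution step would call a real SOAR platform, but the design point the forecast makes is the approval gate in the middle: automation handles volume, while the analyst retains the decision.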
