A new report out today from Google LLC’s Threat Intelligence Group warns of a major shift in cybercrime: attackers are no longer using artificial intelligence solely for productivity gains but are now deploying AI-enabled malware directly in active operations.
The GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools report highlights how state-sponsored and criminal groups are leveraging Gemini and other publicly available large language models to automate, adapt and scale attacks across the entire attack lifecycle.
In a notable first, Google’s researchers have identified malware families, including PROMPTFLUX, PROMPTSTEAL and PROMPTLOCK, that integrate AI during execution to dynamically generate malicious code and obfuscate their behavior.
PROMPTFLUX, for example, interacts with the Gemini application programming interface to rewrite its own VBScript every hour, creating an evolving “thinking robot” that continually mutates to avoid antivirus detection. PROMPTSTEAL, used by the Russia-linked APT28 threat group, queries open-source language models on Hugging Face to generate Windows commands that harvest files and system data before exfiltration.
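To make the pattern concrete, here is a minimal, defanged Python sketch of the “just-in-time” generation flow GTIG describes: the program asks a hosted model for a command at runtime instead of shipping that logic itself. The model name and prompt are illustrative assumptions, and the output is printed rather than executed.

```python
# Defanged sketch of "just-in-time" AI generation: query a hosted model for
# a command at runtime instead of hard-coding it. Assumes a Hugging Face
# API token is available (e.g., via the HF_TOKEN environment variable);
# the model name below is a placeholder choice.
from huggingface_hub import InferenceClient

client = InferenceClient(model="Qwen/Qwen2.5-Coder-32B-Instruct")

response = client.chat_completion(
    messages=[{
        "role": "user",
        "content": "Write a one-line Windows command that lists the files "
                   "in %USERPROFILE%\\Documents.",
    }],
    max_tokens=100,
)

# PROMPTSTEAL-style malware would hand text like this to cmd.exe; this
# sketch only prints it, which is exactly where defenders can intercept.
print(response.choices[0].message.content)
```

The defensive takeaway is that every generation step requires an outbound call to an inference API, giving network monitoring a chokepoint that static, signature-based detection lacks.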
The report states that the rise of “just-in-time” AI attacks is a new milestone in adversarial use of generative models and represents a move toward autonomous, self-modifying malware. The researchers note that while many examples remain experimental, the trend signals how attackers will soon combine AI reasoning and automation to outpace traditional defenses.
Another area of concern raised in the report is social engineering aimed at bypassing AI safety guardrails.
Threat actors linked to Iran and, reportedly, China were observed posing as students, researchers or participants in “capture-the-flag” cybersecurity contests to trick Gemini into providing restricted vulnerability or exploitation data. In one case, the Iran-backed group MUDDYCOAST inadvertently revealed its own command-and-control infrastructure while using Gemini to debug a malware script, a mistake that allowed Google to dismantle the operation.
Not surprisingly, the underground economy for AI-driven hacking tools has also matured rapidly. The researchers found dozens of multifunctional offerings advertised on English- and Russian-language forums, selling capabilities such as phishing-email generation, deepfake creation and automated malware development. Like software as a service, the tools are sold via subscription, lowering the barrier to entry.
State-sponsored groups were found to be the most prolific adopters. North Korea’s MASAN and PUKCHONG have used Gemini for cryptocurrency theft campaigns and exploit development, while Iran’s APT42 experimented with a “Data Processing Agent” that turned natural-language requests into SQL queries to extract personal information.
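GTIG has not published the agent’s internals, but the general text-to-SQL pattern it describes can be sketched in a few lines of Python; the schema, prompt template and stubbed `llm` callable below are assumptions for illustration only.

```python
# Illustrative sketch of a natural-language-to-SQL "data processing agent".
# The schema, prompt and llm() stub are hypothetical; GTIG has not
# disclosed the actual implementation.
import sqlite3

SCHEMA = "CREATE TABLE people (name TEXT, email TEXT, city TEXT)"

def nl_to_sql(request: str, llm) -> str:
    """Translate a plain-English request into SQL via a language model."""
    prompt = (
        f"Given this SQLite schema:\n{SCHEMA}\n"
        f"Return only the SQL for: {request}"
    )
    return llm(prompt)

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO people VALUES ('Ada', 'ada@example.com', 'London')")

# A lambda stands in for the model call; a real agent would hit an LLM API.
query = nl_to_sql(
    "list everyone based in London",
    llm=lambda p: "SELECT name, email FROM people WHERE city = 'London'",
)
print(conn.execute(query).fetchall())  # -> [('Ada', 'ada@example.com')]
```

In a benign setting this is a standard analytics pattern; in APT42’s hands, the report says, the same translation step was pointed at stores of personal information.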
Google says it has disabled accounts and assets associated with these activities and used the intelligence to harden its models and classifiers against further misuse.
“The potential of AI, especially generative AI, is immense,” the report concludes. “As innovation moves forward, the industry needs security standards for building and deploying AI responsibly.”
“Combined with recent reports by Anthropic about the use of Claude by attackers and by OpenAI about the use of ChatGPT, today’s report by GTIG confirms that attackers are leveraging AI to boost their productivity and sophistication,” Evan Powell, chief executive at cyber defense platform DeepTempo, told SiliconANGLE. “The productivity of the attackers is increasing quickly, with other reports such as the Anthropic report showing that they are even planning and executing entire campaigns with speed and intelligence that humans cannot match.”
To address the increasing risk, Google offers the Secure AI Framework, a foundational blueprint aimed at helping organizations design, build and deploy AI systems responsibly. SAIF serves as both a technical and ethical guide to establish security principles that span the entire AI lifecycle, from data collection and model training to deployment and monitoring.
Image: SiliconANGLE/Ideogram