
    ‘They are able to move fast now’: AI is expanding attack surfaces – and hackers are looking to reap the same rewards as enterprises with the technology

    Attack surfaces are expanding at a rapid pace thanks to enterprise AI adoption, according to research from Zscaler, and it’s placing huge strain on cybersecurity teams.

    Findings from Zscaler ThreatLabz’s 2026 AI Security Report show enterprises now face a confluence of threats. The rise of ‘shadow AI’, combined with machine-speed intrusions and the use of AI among threat actors, means time-to-compromise is plunging.

    Deepen Desai, former CSO at Zscaler and head of security research at the ThreatLabz division, told ITPro the first of these issues is a natural byproduct of the industry’s rapid pivot to AI over the last three and a half years.

    Employees have been experimenting with exciting new tools for some time now, often without considering the potential security risks associated with unauthorized AI solutions.

    This creates a huge blind spot for enterprise security teams and raises the risk of disastrous data leakage. Research from Gartner in November 2025 projected up to 40% of enterprises globally will experience a shadow AI-related breach by 2030, underlining the growing risks associated with this trend.

    This is an issue that can be remedied by robust internal guardrails, however. It’s the use of AI by threat actors that has alarm bells ringing for Desai and counterparts across the industry.

    Hackers have already been observed using the technology to fine-tune phishing and vishing attacks, for example, but in recent months they’ve begun flocking to these tools to build and refine malware.

    “It all started with phishing and vishing and their standard initial access attacks, where their goal is just to compromise a credential or an identity,” he told ITPro.

    “But soon we also started noticing malware created using AI, and we are able to tell that because when we reverse those payloads, we’re able to see the comments that a lot of these AI coding tools will add, which is very, very typical.”

    Desai highlighted one recent incident observed by Zscaler in which AI-built malware used a Google Sheets document as a makeshift command-and-control channel.

    “The malware that was deployed in this victim’s environment would connect to a Google Sheet, which had two columns in it,” he explained.

    “One column where the attacker is entering commands that this malware needs to execute on the victim environment, and the second column where the malware will update the results of what came out when it ran these commands.”

    “Whether it was for data exfiltration, whether it is for downloading a new payload or giving that context for the attacker to perform future commands, it was literally being updated every few minutes,” Desai added.
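    The two-column pattern Desai describes amounts to a simple polling loop: the implant reads pending commands from one column and writes results back to the other. A minimal, benign simulation of that mechanism is sketched below — it uses an in-memory list in place of a real Google Sheet and a whitelist of harmless demo handlers instead of shell execution; all names here are illustrative, not taken from the actual malware.

```python
# Benign stand-in for the Google Sheet described in the article:
# column A holds attacker-entered commands, column B holds the
# results the implant writes back after each polling cycle.
SHEET = [["hostname", ""], ["whoami", ""]]

# Whitelisted demo handlers (hypothetical) instead of real shell execution.
HANDLERS = {
    "hostname": lambda: "demo-host",
    "whoami": lambda: "demo-user",
}

def poll_once(sheet):
    """One polling cycle: execute any command lacking a result,
    then record the output in the second column."""
    for row in sheet:
        command, result = row
        if command and not result:
            handler = HANDLERS.get(command)
            row[1] = handler() if handler else f"unknown command: {command}"
    return sheet

poll_once(SHEET)
print(SHEET)  # → [['hostname', 'demo-host'], ['whoami', 'demo-user']]
```

    In the real attack this cycle reportedly repeated every few minutes, which is what let the operator steer the implant interactively: new payloads, exfiltration tasks, and follow-up commands all flowed through the same two columns.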

    Concerns about AI-powered malware have been gaining momentum over the last 18 months, with security experts warning hackers are increasingly relying on the technology to build potent new strains.

    Research from Trend Micro in September 2025 found threat actors are “vibe coding” malware based on dissected threat intelligence reports, allowing them to reverse engineer particular strains and speed up attacks.

    Similarly, just last week Google warned hackers have been abusing its Gemini AI models to build malware.

    The use of AI in these instances is helping to speed up attacks, Desai told ITPro, posing huge challenges for security teams. Combine this with the fact that many of the AI systems used by enterprises are highly susceptible to compromise, and teams now face overlapping security considerations.

    Analysis from the company found many enterprise AI systems “break almost immediately” when tested under adversarial conditions. The median time to first critical failure, for example, was just 16 minutes, and 90% of systems were compromised in under 90 minutes.

    “They are able to move fast now because of the same efficiencies that we’re seeing on the production side,” Desai told ITPro, adding that security teams will likely find themselves fighting off AI-powered attacks with their own internal tools in the near future.

    “You have to use AI to fight AI-driven attacks,” he said. “You need to apply AI at all stages of the attack to detect phishing, to detect malware, to detect exfiltration, to detect command and control activity.”
