    AI-powered Cyber-Attacks Up Significantly in the Last Year, Warns CrowdStrike

    The number of AI-enabled cyber-attacks has nearly doubled during the last year, CrowdStrike has warned, as threat actors deployed machine learning and Large Language Models (LLMs) to help optimize attack techniques and hacking campaigns.

    According to the CrowdStrike Global Threat Report 2026, there was an 89% increase in attacks by “AI-enabled adversaries” in 2025 when compared with the previous year.

    Attackers deployed AI to aid with social engineering, malware development, disinformation campaigns and more.

    Researchers noted that when attackers use AI, it is typically to optimize existing attack methods rather than to create novel attack vectors.

    For example, threat actors have deployed LLMs to help write phishing emails to make them appear more convincing – even in multiple languages – while also reducing the amount of time needed to create the campaigns.

    The report detailed several examples of this, including a campaign attributed to a Chinese intelligence service that leveraged AI to create credible-looking consulting firms to target former US government employees on recruitment and social media platforms, with the aim of intelligence gathering.

    Meanwhile, a Russia-based cyber-criminal operation – which CrowdStrike has dubbed Renaissance Spider – has been detected using AI-based tools to increase the credibility of phishing emails used to deliver ClickFix campaigns to Ukrainian-speaking targets.

    Outside these two examples, CrowdStrike warned that a range of threat actors have deployed AI-related tools to help develop, organize and scale phishing operations.

    “These tools allow threat actors to plan and accelerate reconnaissance operations, create convincing phishing messages and landing pages, conduct spamming activity, and bypass restricted AI tool safeguards to produce illicit content,” said the report. 

    The report has also detailed how certain threat actors have started to experiment with using AI to aid the development of malware. This includes a campaign by Russian state-backed hacking and espionage operation Fancy Bear, which CrowdStrike analysts observed embedding LLM prompting directly into malware to perform operational tasks.

    Dubbed LameHug, the espionage campaign against Ukraine incorporated an LLM into the malware to support reconnaissance and document collection prior to exfiltration.

    While researchers noted that LameHug “did not demonstrate a meaningful increase in effectiveness or sophistication compared to traditional malware,” they said the campaign showcased “continued exploration of AI as a development aid.”

    “This is another area where AI can enable the threat actor and we expect to see more of this,” said Adam Meyers, head of counter adversary operations at CrowdStrike, during a media briefing ahead of the report’s publication.

    CrowdStrike concluded with a warning that attackers will continue to leverage AI for a range of malicious activities.

    “To defend against AI-enabled threats, organizations should develop clear incident response responsibilities and business continuity plans,” said CrowdStrike.

    The company recommended that organizations help protect employees, clients and customers from AI-enabled attacks with strong identity verification procedures, AI-focused security awareness training and threat intelligence monitoring.

    “This is an AI arms race,” said Meyers. “Security teams must operate faster than the adversary to win.”

    Image Credit: PJ McDonnell / Shutterstock.com
