Email-borne malware attacks more than doubled in 2025, rising 131% year on year, according to newly released industry data. The use of artificial intelligence and automation by cybercriminals is reshaping the threat landscape, with phishing and scams also seeing significant increases.
Email vectors
Analysis of more than 72 billion emails processed annually shows that email remains the primary channel for cyber-attacks. Phishing attempts rose by 21%, while email-based scams climbed by 34.7%. Security professionals are now contending with increasingly sophisticated attacks as threat actors deploy generative AI to craft convincing messages and execute multistage operations with limited human involvement.
AI-generated threats
A substantial 77% of Chief Information Security Officers (CISOs) identified AI-generated phishing as a serious and emerging threat within their organisations. Generative AI is enabling attackers to create deceptive emails, synthetic identities, and even impersonate individuals through voice cloning and deepfake videos. The findings show that 61% of CISOs believe artificial intelligence has made ransomware threats worse over the past year.
Faced with these risks, 68% of organisations invested in AI-powered detection and protection systems during 2025. Yet, traditional defences are under strain as AI-driven attacks blur the distinction between genuine and fraudulent communications, making it more difficult for legacy controls to keep pace.
Attacker tactics
Cybercriminals are increasingly leveraging generative AI and automation to identify vulnerabilities within corporate networks, create targeted phishing efforts, and coordinate complex, multi-stage intrusions. Autonomous AI, able to act without direct human supervision, has introduced new risks, complicating incident detection and response procedures.
“AI is both a tool and a target, and attack vectors are expanding faster than many realize. The result is an arms race where both sides are using machine learning. On one side, the goal is to deceive; on the other, to defend and forestall.
“Attackers are increasingly using generative AI and automation to identify vulnerabilities, craft more convincing phishing lures, and orchestrate multi-stage intrusions with minimal human oversight. The rise of agentic AI compounds this risk, as autonomous actions can occur without human oversight or established approval chains,” said Daniel Hofmann, CEO, Hornetsecurity.
Identity fraud
Among the most pressing concerns for CISOs is synthetic identity fraud, in which AI is used to create false identification documents or credentials. This is compounded by deepfake technology, which can convincingly mimic voices or generate realistic video content used to trick or extort individuals or businesses. Other issues include model poisoning attacks, in which AI systems are corrupted with malicious data, and risks tied to employees using public AI tools without controls.
These developments are changing the way trust is targeted, shifting the emphasis away from traditional breach techniques such as forcing entry into systems and towards more insidious efforts aimed at eroding trust in digital communications and identities.
Leadership awareness
While many organisations are strengthening technical defences and developing recovery processes, there is a wide gap in C-suite understanding of AI’s influence on cyber risks. CISOs report varying levels of awareness among board and executive teams, with only some exhibiting a comprehensive grasp of emerging issues linked to AI adoption.
Building resilience through cultural change, rather than relying solely on prevention, is becoming a central focus for security teams seeking to protect against the evolving threat environment. Board-led crisis simulation exercises and cross-functional response playbooks remain rare in most businesses.
“The results of our 2025 report demonstrate that organizations are learning to recover without negotiating. But in-house security awareness efforts need to evolve at the pace of AI adoption.
“Few boards run cyber crisis simulations, and cross-functional playbooks remain the exception rather than the rule. As AI-driven misinformation and deepfake extortion become more commonplace, a security culture of readiness, backed by an awareness of AI and the possibilities it creates, will have to be a focus for 2026,” said Hofmann.
