AI-native malware, deepfake fraud and automated attacks on connected devices are set to drive enterprise cyber risk in 2026, according to new forecasts from VIPRE Security Group.
The company expects criminal groups to move beyond experimental use of generative AI and build fully automated attack systems. It also anticipates a sharp rise in regulation of AI and data protection, which it says will increase pressure on organisations to improve staff security awareness.
Usman Choudhary, Chief Product & Technology Officer at VIPRE Security Group, set out the firm’s view of the key trends that will shape the threat landscape next year.
AI-native malware
VIPRE predicts that 2026 will be defined by what it describes as AI-native malware ecosystems. These attack tools will change their own code continuously and react in real time to defensive measures. The firm says such software will evade traditional, static detection techniques and shorten the timeline from initial probing of a network to full compromise.
Attackers are expected to use large language model engines to assemble autonomous exploit kits. These systems will scan for unpatched vulnerabilities, generate tailored payloads and execute attacks without direct human control.
VIPRE expects this automation to lower barriers to entry for less skilled criminals. It warns that small and mid-sized enterprises are at particular risk as attackers look for “springboard” access into larger partners and customers through supply chains.
Deepfake fraud services
The spread of deepfake tools for fraud is forecast to continue, with VIPRE pointing to the rise of marketplaces that package such tools as subscription services. The firm expects these platforms to offer voice and video impersonation based on material harvested from public sources.
According to the outlook, criminals will use such services to impersonate executives, suppliers and IT staff during targeted scams. VIPRE believes this will drive a new wave of business email compromise incidents.
Predicted attack scenarios include fraudulent payment instructions, social engineering of multi-factor authentication resets and false customer support interactions that harvest credentials.
VIPRE links this risk to the normalisation of remote and hybrid work. It says staff will find it harder to distinguish genuine interactions from synthetic ones when deepfake content is combined with detailed background information taken from social media and other open sources.
IoT and OT exposure
The group expects a surge in attacks on Internet of Things (IoT) and operational technology (OT) systems as AI tools scan large numbers of devices for weaknesses. It notes that connected medical equipment, industrial control systems and other smart devices are expanding the attack surface across many sectors.
AI-driven scanning tools are forecast to identify misconfigurations, weak authentication and outdated firmware faster than human-led assessments. VIPRE warns that this will increase pressure on operators of critical infrastructure, logistics networks and healthcare services.
Potential outcomes include operational downtime, altered sensor readings, disrupted manufacturing and service delivery, and ransomware that halts essential processes. VIPRE highlights zero-trust style network segmentation, continuous device monitoring and structured patching programmes as likely areas of focus for security teams.
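To illustrate the kind of structured patching check such programmes rely on, the sketch below flags devices running firmware older than a known patched version. The device types, inventory records and version thresholds are hypothetical examples for illustration, not actual VIPRE tooling.

```python
# Hypothetical minimum patched firmware versions per device type.
MIN_FIRMWARE = {
    "infusion-pump": (3, 2, 1),
    "plc-controller": (8, 0, 4),
}

def parse_version(text):
    """Turn a version string like '3.1.9' into a comparable tuple (3, 1, 9)."""
    return tuple(int(part) for part in text.split("."))

def flag_outdated(inventory):
    """Return IDs of devices whose firmware is below the minimum patched version."""
    flagged = []
    for device in inventory:
        minimum = MIN_FIRMWARE.get(device["type"])
        if minimum and parse_version(device["firmware"]) < minimum:
            flagged.append(device["id"])
    return flagged

# Hypothetical asset inventory standing in for a real device register.
inventory = [
    {"id": "pump-01", "type": "infusion-pump", "firmware": "3.1.9"},
    {"id": "plc-07", "type": "plc-controller", "firmware": "8.0.4"},
]
print(flag_outdated(inventory))  # → ['pump-01'] (below the 3.2.1 minimum)
```

In practice this sort of check would run continuously against a live asset inventory rather than a static list, which is the "continuous device monitoring" the forecast refers to.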
Supply chain focus
Software supply chain attacks remain one of the most efficient routes to large-scale access, and VIPRE expects adversaries to expand this approach with AI support. It forecasts broader use of AI-generated exploit code and automated identification of weaknesses in software dependencies.
Likely techniques include injecting malicious components into popular open-source projects and compromising third-party service providers. Attackers may also use AI systems that mimic developer coding styles, which could make it harder for reviewers to spot harmful changes.
Automated bots are expected to scan source code repositories for exploitable misconfigurations. VIPRE says enterprises will need to tighten software integrity checks, reinforce secure development practices and extend automated monitoring across their software supply chains.
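The software integrity checks mentioned above typically work by pinning a cryptographic digest of each dependency and refusing anything that no longer matches, as lockfile-based package managers do. The sketch below shows the idea with a throwaway file standing in for a downloaded dependency; the file and its contents are invented for illustration.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, pinned_digest):
    """True if the artefact on disk still matches the digest pinned earlier."""
    return sha256_of(path) == pinned_digest

# Throwaway file standing in for a fetched dependency archive.
with tempfile.NamedTemporaryFile(delete=False, suffix=".tar.gz") as f:
    f.write(b"pretend package contents")
    artefact = f.name

pinned = sha256_of(artefact)           # digest recorded at pin time
ok_before = verify(artefact, pinned)   # True: artefact unchanged

with open(artefact, "ab") as f:        # simulate a tampered artefact
    f.write(b"!")
ok_after = verify(artefact, pinned)    # False: digest no longer matches

print(ok_before, ok_after)
os.remove(artefact)
```

The same comparison underlies hash-checking modes in mainstream package managers; the value of the pattern is that an AI-generated malicious change to a dependency fails the check regardless of how convincingly it mimics a developer's coding style.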
Rising regulatory pressure
Governments in multiple regions introduced or advanced AI and privacy rules this year. VIPRE expects 2026 to bring a further round of measures that increase compliance demands on organisations handling sensitive data or deploying AI systems.
It points to the strengthening of the EU AI Act with additional operational checkpoints, expanded state-level privacy and algorithmic accountability requirements in the US, and AI transparency and risk frameworks in several Asia-Pacific markets. It also notes emerging global proposals that would require organisations to report AI-generated cyber incidents.
Against this backdrop, VIPRE says human error remains the main cause of compliance failures and many costly breaches. It cites misdirected communications, weak handling of customer data and poor verification processes when dealing with deepfakes as ongoing problems.
The company links these issues to a need for more practical, scenario-based security training for employees. It argues that such programmes are becoming a central part of demonstrating compliance and managing risk.
