When it comes to cyberattacks across industrial environments, the role of AI (artificial intelligence) falls somewhere between real escalation and inflated alarm. Most alleged AI-enabled threats are not stand-alone systems running in isolation within OT networks. Rather, threat actors are leveraging AI to accelerate human-driven activity, such as automating reconnaissance, generating targeted phishing, and crafting functional exploit code within minutes. Work that previously took specialized teams and long development cycles can now be done at machine speed across connected OT environments. This shift is not theoretical.
SANS analysis shows AI is already driving a sharp increase in the speed and scale of phishing and exploit creation across multiple phases of the attack lifecycle. At the same time, research like Check Point’s VoidLink study reveals how AI can assist in creating sophisticated malware frameworks, producing complex code structures in days that traditionally would take coordinated development efforts. This doesn’t mean fully autonomous weaponized AI has taken over OT attacks, but it does show AI is lowering barriers to high-complexity threats and amplifying human capabilities.
Data from ecrime.ch shows that ransomware actors posted 7,819 incidents to data leak sites in 2025. The U.S. was most heavily targeted, with nearly 4,000 incidents. Canada (over 400), Germany (292), the U.K. (248), and Italy (167) rounded out the top five targeted nations. Leading ransomware groups included Qilin, Akira, Cl0p, PLAY, and SAFEPAY.
Zero trust helps, as microsegmentation, strict authentication, and least-privilege policies can slow lateral movement and reduce exposure. But in OT environments with legacy systems and safety priorities, it can’t stop every adaptive adversary. AI-assisted attackers expose structural weaknesses in visibility, detection, and coordination between OT and security teams. Accountability gaps arise when defenders’ processes lag behind attackers’ speed. Building real resilience now means redefining assumptions about what attacks look like and how quickly they evolve, and embedding continuous learning into defense playbooks.
Industrial Cyber spoke with industrial cybersecurity experts to examine how AI is being practically used in attacks against OT environments today, and where clear gaps remain between credible threats and exaggerated narratives around AI-driven attacks.

“In the energy and manufacturing sectors, AI currently functions as a sophisticated technical force multiplier rather than an autonomous digital soldier,” Fernando Guerrero Bautista, OT security expert at Airbus Protect, told Industrial Cyber. “We see it practically applied, for example, in reverse-engineering proprietary industrial protocols or in the generation of highly targeted spear-phishing that mimics the technical lexicon of substation operators.”
Bautista added that the clear gap lies between the myth of the autonomous adversary and the reality of accelerated weaponization. “While we must be aware of these advanced threats, we shouldn’t over-engineer our defense. Building a resilient system doesn’t always mean buying the latest AI-powered security gadget. More often than not, the strongest shield we have is just getting the fundamentals right, like knowing exactly what’s on our network and keeping it patched, rather than chasing complex tools that haven’t been battle-tested yet.”

Paul Lukoskie, senior director of threat intelligence at Dragos, told Industrial Cyber that AI significantly lowers the barrier of entry for less sophisticated adversaries to successfully conduct more comprehensive and complex cyber-attacks because of its ability to help scale initial intrusion tactics such as social engineering. “AI can also be used to help discover vulnerabilities in specific technologies used by a desired target, write code to help malware bypass endpoint detection tools, and even optimize attack paths once the adversary has established a foothold within a victim organization’s environment.”
Providing one such validated example, Lukoskie pointed to the GTG-2002 and GTG-1002 campaigns that were both observed in 2025. “According to open-source intelligence (OSINT) reporting, the attacker was assessed to have used Anthropic’s Claude Code to automate several layers of the intrusion and post-compromise behavior to include reconnaissance, vulnerability scanning, lateral movement, and credential theft.”

Threat actors are using AI to enhance social engineering and identify new vulnerabilities at an unprecedented scale, Eric Knapp, product manager at Nozomi Networks, told Industrial Cyber. “AI empowers attackers throughout the attack lifecycle—from reconnaissance and planning to data exfiltration and execution. Humans remain the weakest link, and AI is designed to exploit those vulnerabilities relentlessly. With AI’s ability to analyze software at scale, we must assume the zero-day arsenal is growing even if defenders aren’t aware yet. This is why companies need robust security research (SecRes) investments and partnerships—no organization can combat this alone.”

Steve Mustard, an independent automation consultant and ISA Fellow, told Industrial Cyber that in OT, AI still has a limited impact on physical process manipulation, safety system exploitation, and autonomous end-to-end attacks. “However, AI attacks can support more subtle, persistent operational degradation—slightly reducing efficiency, increasing wear on machinery, or manipulating quality margins in ways that evade traditional control system alarms. These attacks aim to cause economic harm, erode safety, or undermine confidence rather than immediate disruption, making them harder to detect and attribute.”
Noting that AI can be weaponized in more practical ways, he observed that the real threat lies not in AI replacing human attackers, but in lowering the skill threshold and compressing the time required for every element of the attack chain, from reconnaissance and initial access to discovery and command and control. AI reduces the friction that slows and deters cyberattacks.

Dennis Hackney, vice-chairperson of the ISA Global Cybersecurity Alliance (ISAGCA) Advisory Board, said that to his knowledge, there is no record to date of AI being used to completely dismantle an OT environment, including Supervisory Control and Data Acquisition (SCADA), Distributed Control System (DCS), or Programmable Logic Controllers (PLCs). “Instead, AI has been primarily used for data exfiltration through prompt injection, and as an applied support service for reconnaissance to discover operational technologies, accounts, passwords, and control system functions.”
From his point of view, Hackney said there are three primary AI attack examples, based on IT events in 2025, that should alarm those in critical infrastructure if the same techniques were applied to OT. He stressed that these are conjectures:
- If vulnerability swarming were applied to browser exploits, such as the Chromium zero-day CVE-2025-14174, which demonstrated that remote code execution can be achieved via out-of-bounds memory access in the browser. Browsers are the new middleware for industrial PCs.
- If agentic AI, like that used in the attack leveraging Anthropic’s Claude to exfiltrate financial data, were instead turned against edge devices, specifically enterprise routers, VPN concentrators, and network management appliances supported by third-party vendors, leading to advanced persistence, data exfiltration, and credential harvesting.
- If DevOps pipeline automated-service prompt injection (i.e., PromptPwnd) were instead used against supporting infrastructure such as virtual control servers, opening the possibility of mass disruption. AI model poisoning is also a concern for predictive maintenance, optimization, and digital twins, and as AI use expands, these concerns grow with it.
The experts address which phases of the OT attack lifecycle are most vulnerable to AI-driven acceleration, from reconnaissance and lateral movement to the manipulation of industrial processes. Against this backdrop, they assess how far zero trust principles can realistically go in constraining the impact of AI-enabled adversaries.
“The reconnaissance and lateral movement stages are the most exposed to AI acceleration. AI-driven tools can map a utility’s network topology and identify ‘hidden’ pathways between corporate VPNs and critical control zones faster than humans,” Bautista said. “In this context, Zero Trust principles act as a form of digital interlocking. Much like a physical interlock prevents a high-voltage switch from opening under an unsafe load, Zero Trust ensures that even if an AI-assisted attacker compromises a valid credential, they have a hard time moving deeper into the process level or issuing a ‘trip’ command without a secondary multi-factor verification. Therefore, Zero Trust not only locks attack pathways, but also enforces operational integrity.”
He added that beyond simple access control, Zero Trust acts as a ‘sanity check’ for the grid by enforcing contextual validation. While an AI might use stolen credentials to mimic a human, Zero Trust scrutinizes the intent and physics of every command, verifying if a request matches scheduled maintenance, originates from an authorized workstation, or even makes sense for the current state of the power flow.
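As a rough illustration of the contextual validation Bautista describes, the sketch below models a zero-trust command broker that refuses a breaker-trip request unless it originates from an authorized workstation, matches an open maintenance ticket, and is consistent with the current process state. All names and checks are hypothetical simplifications, not any vendor’s implementation.

```python
# Minimal sketch of contextual command validation in a zero-trust OT broker.
# Every name here (CommandRequest, the individual checks) is illustrative;
# a real deployment would query the historian, work-order system, and IAM.
from dataclasses import dataclass

@dataclass
class CommandRequest:
    operator_id: str
    workstation: str
    command: str          # e.g., "TRIP_BREAKER_7"
    target_asset: str

AUTHORIZED_WORKSTATIONS = {"eng-ws-01", "eng-ws-02"}   # hypothetical allowlist

def scheduled_maintenance_open(asset: str) -> bool:
    """Stub: ask the work-order system whether a ticket is open on this asset."""
    return asset in {"BREAKER_7"}  # placeholder data

def consistent_with_process_state(command: str, asset: str) -> bool:
    """Stub: ask the state estimator whether the command makes physical sense
    (e.g., don't open a breaker carrying critical load with no alternate path)."""
    return True  # placeholder

def validate(req: CommandRequest) -> bool:
    checks = [
        req.workstation in AUTHORIZED_WORKSTATIONS,       # where it came from
        scheduled_maintenance_open(req.target_asset),     # why it is happening
        consistent_with_process_state(req.command, req.target_asset),  # physics
    ]
    return all(checks)  # deny by default: every check must pass

req = CommandRequest("op-117", "eng-ws-01", "TRIP_BREAKER_7", "BREAKER_7")
print("allow" if validate(req) else "deny and escalate to MFA/operator review")
```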
Lukoskie also said that reconnaissance is likely going to be the most realistic use case for AI in an attack against an OT environment because an adversary could use it to quickly get smart on a specific OT asset and determine what types of vulnerabilities may exist, and even determine what methods could be used to exploit the vulnerability. “AI could also be used by an adversary to assess how an organization’s OT environment is structured because there are many use cases on the internet that capture common network maps for different sectors. It may not be a true 1:1 for the targeted organization, but the information provided by the AI generator could be enough to help the adversary more logically dictate their next steps.”
“In an OT environment, networks should already be fairly segmented, and communications between assets like PLCs, HMIs, and workstations should already be limited to only necessary communications,” according to Lukoskie. “Theoretically, this level of segmentation, combined with strict authentication practices, should drastically limit the ability for an adversary to effectively (or easily) use AI to attack or manipulate an organization’s OT assets.”
However, he added that zero trust isn’t always a realistic option for many OT asset owners because of things like legacy systems and proprietary protocols like Modbus, which lack built-in security, encryption, and identity authentication practices. Moreover, many industrial organizations prioritize availability and safety over security, and zero trust could impact a system’s efficiency through its micro-segmentation and continuous monitoring principles.
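Lukoskie’s point about Modbus is easy to demonstrate: a complete, valid Modbus/TCP read request is a dozen plaintext bytes with no field for identity, authentication, or integrity. The sketch below, which uses only Python’s standard library and a hypothetical target address, builds such a frame; segmentation and conduits are what stand between it and a PLC.

```python
# Sketch: a complete, valid Modbus/TCP "read holding registers" request.
# Note what is absent: no session, no credential, no signature. Anything that
# can reach TCP/502 on the device can speak the protocol, which is why
# segmentation matters so much in OT networks.
import struct

def read_holding_registers_frame(transaction_id: int, unit_id: int,
                                 start_addr: int, count: int) -> bytes:
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function 0x03 + args
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_frame(1, unit_id=1, start_addr=0, count=10)
print(frame.hex())  # 12 bytes total; every field is plaintext protocol data

# Sending it is one connect + send away (do NOT run against live equipment):
# import socket
# with socket.create_connection(("192.0.2.10", 502), timeout=2) as s:
#     s.sendall(frame)
#     reply = s.recv(260)
```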
“Reconnaissance is most exposed to AI threats. The easiest entry point remains legitimate credentials obtained through subterfuge, theft, or brute force, all areas where AI excels,” Knapp said. “AI quickly identifies vulnerabilities in siloed systems that operators may lack visibility into. Once inside systems, attackers can change parameters, inject malicious commands, and cover their tracks while reporting expected results. This makes continuous monitoring critically important.”
Mustard said that AI has the greatest impact at several stages of the attack chain. In reconnaissance, it synthesizes large volumes of data, including organizational intelligence (org charts, vendor lists, tech stacks), public data and job postings, and technical documentation (network diagrams, engineering drawings, specifications, and configuration files). In initial access, it crafts highly targeted social engineering campaigns; in defense evasion, it helps shape behavior to blend into normal activity and times actions to avoid detection windows; in discovery, it maps networks, trust relationships, and roles; and in lateral movement, it selects optimal paths based on access and adapts movement strategy when blocked.
“In this context, zero trust principles offer real value,” Mustard pointed out. “Strong identity controls, network segmentation, and least-privilege access can significantly reduce the speed and reach of AI-assisted adversaries. However, zero trust is not a silver bullet; once attackers achieve legitimate access or compromise trusted engineering workflows, AI can help them blend in more effectively rather than bypass controls outright.”
Hackney said that currently, reconnaissance appears to be the most exposed, with data exfiltration being the primary goal of AI-based cyber events. “The data being stolen includes critical infrastructure details as well as accounts and passwords. There are warranted concerns in industry about AI model poisoning and its impact on optimization, preventive maintenance, and the viability of digital twins. This concern shifts the exposure from reconnaissance to impact.”
In these examples, he added that an effective zero-trust architecture would be ideal, with checkpoints at every endpoint. Specifically, strict separation between AI-driven processes, such as optimization and digital twins, and the plant process control should be enforced.
The executives look at how AI is reshaping the impact of industrial cyber incidents, particularly when attackers focus on subtle, persistent operational degradation or economic harm rather than immediate disruption.
Bautista mentioned that AI changes the nature of impact by enabling subtle operational degradation rather than immediate, loud disruptions. “Instead of a sudden blackout, an attacker can use AI to mask minute manipulations of voltage regulation, frequency response, or chemical mixtures. These changes are designed to mimic normal grid noise or equipment aging, enabling a persistent presence that undermines long-term reliability without triggering safety systems. It turns a cyber incident into an economic siege that can ruin hardware and profitability for years.”
Theoretically, Lukoskie said that AI could significantly change how adversaries approach long-term cyber-attack campaigns against industrial organizations for one primary reason: AI could help an adversary increase the probability of a successful attack without causing catastrophic disruption.
“In a hypothetical example, once an adversary gains access to an industrial organization’s OT environment and they’re undetected, they could use AI to analyze any captured OT telemetry to help identify which assets or systems are going to provide the most value with the least risk of detection,” according to Lukoskie. “Further, AI can be used with machine learning models to understand and mimic normal operational behavior, thereby enabling long-term persistence within the victim’s environment.”
Lastly, he believes that if AI enabled the adversary to remain undetected for long enough, any subtle degradation would likely cause confusion and uncertainty within the victim’s environment, leading to potentially costly and time-consuming maintenance and troubleshooting.
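The slow-drift scenario Lukoskie and Bautista describe is precisely what change-point statistics were designed to catch. Below is a minimal two-sided CUSUM detector running on a single simulated process variable; the baseline parameters and the k (slack) and h (alarm) constants are illustrative assumptions that would need tuning per control loop.

```python
# Minimal two-sided CUSUM drift detector for a single process variable,
# the classic statistical answer to "slow manipulation hiding in noise."
# The baseline mean/std and the k and h constants are illustrative
# assumptions, not recommended values.
import math

def cusum(stream, mean, std, k=0.5, h=5.0):
    """Yield (sample_index, value, alarm) for each observation."""
    hi = lo = 0.0
    for i, x in enumerate(stream):
        z = (x - mean) / std        # standardize against the learned baseline
        hi = max(0.0, hi + z - k)   # accumulates sustained upward drift
        lo = max(0.0, lo - z - k)   # accumulates sustained downward drift
        yield i, x, (hi > h or lo > h)

# Simulated sensor: an on-spec oscillation around 50.0, then a +0.5 bias from
# sample 300 onward -- far inside any per-reading alarm limit, so a simple
# threshold check never fires, but the accumulated statistic does.
readings = [50.0 + 0.6 * math.sin(i / 5.0) for i in range(300)]
readings += [50.5 + 0.6 * math.sin(i / 5.0) for i in range(300, 600)]

for i, x, alarm in cusum(readings, mean=50.0, std=0.8):
    if alarm:
        print(f"drift alarm at sample {i} (value {x:.2f})")
        break
```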
Underlining that AI learns and adapts by design, Knapp said that when attackers lie in wait, AI helps them evade detection longer and identify new vulnerabilities throughout systems. “They can better understand mission-critical assets and how to inflict maximum damage. Previously, crafting cyber-physical attacks required extensive data and expertise. Now, once attackers access process control system information, AI makes interpreting and analyzing that data almost trivial. AI makes incidents more impactful and dangerous while also enabling subtler, more covert attacks.”
As an OT practitioner, Hackney said that he is particularly concerned with AI-driven automated support models, the kind promising gains in automated ticket handling and infrastructure updates that traditional IT operations teams dream of.
“In OT, management of change is a process of absolute control over the tiniest change, including a single software update,” he added. “With automated Continuous Integration/Continuous Delivery (CI/CD) pipelines, software updates are managed through automated processes, including those for operating systems. When properly designed, checks and safeguards can be built in to ensure CI/CD is safe; however, with a single lucky exploit, an adversary might be able to breach or disrupt all CI/CD-managed systems, taking down the critical infrastructure.”
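One safeguard of the kind Hackney alludes to is a release gate that refuses to push any artifact toward OT unless its digest appears on a manifest signed through the management-of-change (MoC) process. The sketch below, with hypothetical file names and deliberately simplified key handling, shows the shape of such a check.

```python
# Sketch of one CI/CD safeguard in the spirit of Hackney's point: the step
# that releases an update toward OT verifies the artifact against a manifest
# that only the management-of-change process can sign. File names, paths,
# and the key handling shown here are hypothetical simplifications.
import hashlib
import hmac
import json
from pathlib import Path

APPROVAL_KEY = b"replace-with-HSM-held-key"   # placeholder; never hardcode keys

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_release(artifact: Path, manifest_path: Path) -> bool:
    manifest = json.loads(manifest_path.read_text())
    # 1. The manifest itself must carry a valid MAC from the MoC approver.
    body = json.dumps(manifest["approved"], sort_keys=True).encode()
    expected = hmac.new(APPROVAL_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False                      # tampered or unapproved manifest
    # 2. The artifact's digest must appear on the approved list.
    return sha256(artifact) in manifest["approved"]

if __name__ == "__main__":
    ok = verify_release(Path("plc_fw_v2.14.bin"), Path("moc_manifest.json"))
    if not ok:
        raise SystemExit("release blocked: artifact not approved via MoC")
```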
The executives focus on why many existing OT security controls and detection approaches are struggling to keep pace with adaptive, AI-assisted threat actors, and where the largest structural and organizational gaps remain.
“Many existing OT security controls struggle because they are signature-based, meaning they only recognize ‘known’ bad files or patterns. AI-assisted threats are polymorphic, adapting their behavior in real-time to blend in with legitimate industrial traffic,” according to Bautista. “The most significant structural gap is the ‘Context Gap’ between IT and OT. Most security teams are trained to defend data packets but lack an understanding of load shedding or grid stability. Conversely, plant operators understand the physics but may not recognize a cyber-anomaly masked as a process fluctuation. AI exploits this vacuum, hiding its activity in the space where digital security ends and physical engineering begins.”
Lukoskie said that the biggest structural and organizational gaps are those that still exist around treating OT cybersecurity the same as an organization treats IT cybersecurity. “Or, in some cases, cybersecurity best practices are only really applied to the IT environment because the frequency of cyber-attacks is far greater against IT than OT. Another gap exists in how industrial organizations heavily rely on vendors for monitoring, maintenance, and security updates. Vendors will often perform those tasks remotely, which inherently introduces risk for the industrial organization in the event of an adversary using AI to enable attacks against third-party vendors or even supply chain elements.”
“Traditional OT security was built for a different threat landscape. Many industrial environments run decades-old technology never designed with security in mind, and the protective air gap was eliminated long ago,” Knapp said.
He added that AI-enabled adversaries automate reconnaissance and adapt faster than human defenders respond. “The fundamental gap: you can’t protect what you can’t see or understand. Many operators lack visibility across connected systems, and security teams often lack a real understanding of process control systems, making it difficult to interpret critical data. I’ve seen examples of security teams actually turning off or turning down the data from OT because the SOC doesn’t know what to do with it.”
“Many existing OT security controls struggle to keep pace with such adaptive threats because they are static, signature-driven, and siloed,” Mustard said. “Detection models often assume known failure modes, stable baselines, and human-paced adversaries. Organizationally, gaps between IT security, engineering, and operations further slow response and obscure accountability, giving adaptive attackers room to maneuver.”
Hackney disclosed that traditional security operations rely on IT-born threat detection, signature-based systems, and anomaly-based systems, which are not fully integrated with all access control mechanisms in the critical infrastructure. “Anomaly-based approaches are a step up; however, just as with signatures, they must be defined to be effective. AI, specifically agentic AI, can alter the tactics and technologies (malware) used in exploits at a rate and in a pattern that are too rapid and unpredictable for traditional security operations technologies to keep up with. While AI models are advancing security operations, many believe the only practical way to stay ahead of the AI adversary is to go on the offensive.”
The executives explore how industrial organizations must rethink incident response, governance, and accountability as adversaries gain the ability to observe, learn, and adapt faster than traditional human-led defensive processes.
Bautista said that “we must strengthen our ‘Human-ON-the-loop’ oversight. When an adversary adapts at machine speed, waiting for a committee to authorize a shutdown is a failure. Governance must empower systems to enter a ‘deterministic safe state,’ an automated, pre-authorized posture that protects physical equipment while humans oversee recovery. Resilience must be treated as a core engineering requirement, not a secondary IT task.”
Lukoskie said that industrial organizations must first accept that AI, machine learning, and large language models are now embedded in the threat landscape, and that adversaries will inevitably find ways to exploit them for a range of cyber objectives. In response, organizations should look to continuous detection and monitoring approaches that use AI and machine learning to surface anomalous activity, including subtle behaviors that human-led security teams are likely to miss, while also developing gray-zone response playbooks for non-catastrophic events such as system degradation or operational slowdown.
At the same time, organizations need clearly defined IT-OT decision rights before an incident occurs, supported by a unified governance framework that establishes system ownership, security responsibilities, triage authority, and supply-chain accountability. Finally, AI should be treated as a force multiplier for accountability, enabling organizations to model incident response policies, validate practices, and support functions such as KPI development, segmentation strategies, data quality tracking, and structured post-incident decision-making.
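As a sketch of the machine-learning side of that recommendation, the example below trains an unsupervised outlier model on simulated multivariate OT telemetry and flags live readings that fall outside the learned envelope. The feature set and thresholds are invented for illustration; real inputs would come from a historian or a passive network sensor.

```python
# Sketch: unsupervised anomaly surfacing on multivariate OT telemetry using
# scikit-learn's IsolationForest. Feature names and parameters are invented
# for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Training window: normal operation (flow, pressure, valve position).
normal = rng.normal(loc=[120.0, 8.5, 62.0], scale=[4.0, 0.2, 3.0],
                    size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Live window: mostly normal readings, plus a manipulated tail where pressure
# creeps high while valve position drops -- a joint pattern absent from the
# training data.
live = rng.normal(loc=[120.0, 8.5, 62.0], scale=[4.0, 0.2, 3.0], size=(100, 3))
live[-5:] = rng.normal(loc=[120.0, 9.1, 48.0], scale=[4.0, 0.2, 3.0], size=(5, 3))

flags = model.predict(live)   # -1 marks an outlier
# Expect the manipulated tail here, possibly plus a stray normal point near
# the 1% contamination contour.
print("anomalous samples:", np.where(flags == -1)[0].tolist())
```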
“Organizations must assume compromise is inevitable,” Knapp said. “This means understanding your process environment and maintaining practiced response and recovery plans. In OT systems, you must restore operations alongside forensics and remediation, which is more than just backup restoration. AI can help defenders here, and not just through monitoring: AI-powered assistants can provide understanding during peacetime, early attack stages, and recovery, where understanding is key to success.”
To respond effectively, Mustard said that industrial organizations must rethink incident response and governance. “Faster adversaries demand pre-authorized response actions, clearer decision rights, and tighter integration between cyber, safety, and operations teams. Human-in-the-loop remains essential, but humans must be supported by automation that accelerates containment and learning without sacrificing safety.”
Hackney called for learning everything possible about AI, AI attacks, and AI defenses, and for dedicating a team to the task. “Stand up an AI function led by an AI Executive in charge of managing AI risks. Ensure that the organization’s security operations have a direct (non-reporting) channel to the AI Risk Executive to manage cyber breach risks.”
Looking ahead, the executives assess what meaningful resilience looks like in an OT threat landscape shaped by AI, and which long-standing assumptions in industrial cybersecurity are becoming obsolete or increasingly risky.
“Meaningful resilience is defined by Graceful Degradation, or the ability to keep ‘black start’ capabilities intact and the neighborhood energized even when the digital layer is compromised,” Bautista outlined. “We are seeing a breakdown in the long-standing assumption that ‘air-gaps’ or ‘obscure protocols’ provide security; in the age of AI, every technical manual is an open book. True resilience requires a shift back to engineering fundamentals: assuming the digital wall will be breached and ensuring that human operators can still pull the plug on ‘smart’ features to run the grid or plant manually.”
“Resilience should move beyond the traditional ‘block and tackle’ techniques that heavily focus on network deterrence, patching, and so on,” Lukoskie said. “AI-enabled adversaries are something every organization must plan for to ensure continuity of operations.”
Lukoskie pointed to areas where AI can help shape an organization’s OT cybersecurity program, including AI-led dynamic threat modeling and supply chain integrity backed by AI-led verification. Industrial organizations should also be thinking about AI-led resilience practices, as several long-standing assumptions need to fade into the proverbial sunset.
Knapp said that meaningful resilience requires adapting the tenets of cybersecurity that many of us take for granted. “We moved from ‘trust but verify’ to ‘zero trust,’ and now it’s time for ‘zero trust but still verify anyway.’ This means constantly watching and maintaining situational awareness and situational understanding, which in turn means more information to keep track of.”
He added that when people say that AI-powered defense is the new necessity, “it’s because we don’t have the capacity as humans to process that much data in a way that leads to understanding. That’s where AI is helping us the most today: not just in being better at detecting threats but in understanding them.”
Mustard remarked that meaningful resilience in an AI-shaped OT threat landscape is less about perfect prevention and more about graceful degradation, rapid recovery, and continuous learning. “Long-standing assumptions—that attackers are slow, that failures are obvious, or that perimeter defenses are sufficient—are increasingly risky. Resilient organizations will be those that assume compromise, design for adaptability, and treat cybersecurity as an integral part of ongoing operations rather than a distinct and separate discipline.”
Hackney said that in the world of modern AI, network, logical, and digital segmentation is more important now than ever before. “Critical systems and critical processes, especially those that serve as safety functions, are best isolated using practices like ISA/IEC 62443 zones and conduits. Beyond physical separation, more can be learned about software development, DevOps, and digital identity management in the industrial sector. This is a space where advancements in software integration and management promise to add many efficiencies, specifically in non-safety-critical functions.”
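At enforcement time, the zones-and-conduits model Hackney cites reduces to a default-deny table of which zone pairs may communicate, and over which protocols. The sketch below, with illustrative zone names and protocol choices that are not drawn from the standard itself, captures that essence.

```python
# Sketch: ISA/IEC 62443 zones and conduits reduced to their enforcement-time
# essence -- a default-deny table of which zone pairs may talk, and over what.
# Zone names and protocol choices are illustrative, not from the standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class Conduit:
    src: str
    dst: str
    protocols: frozenset

CONDUITS = {
    Conduit("enterprise",  "dmz",         frozenset({"https"})),
    Conduit("dmz",         "supervisory", frozenset({"opc-ua"})),
    Conduit("supervisory", "control",     frozenset({"modbus-tcp"})),
    # Note: no conduit touches the "safety" zone; it is isolated by design.
}

def allowed(src_zone: str, dst_zone: str, protocol: str) -> bool:
    """Default deny: traffic passes only if an explicit conduit permits it."""
    return any(c.src == src_zone and c.dst == dst_zone and protocol in c.protocols
               for c in CONDUITS)

assert allowed("supervisory", "control", "modbus-tcp")
assert not allowed("enterprise", "control", "modbus-tcp")  # no direct path
assert not allowed("supervisory", "safety", "opc-ua")      # safety stays isolated
```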
He concluded that OT security professionals are now investing in learning software security skills, such as threat modeling, CI/CD pipeline vulnerability scanning, and mitigation. Cybersecurity standards are sure to follow suit.
