
Over the years, I’ve had a front-row seat to how close OT cybersecurity has come to addressing some of its hardest problems.
Not because of a lack of capable vendors or thoughtful engineers. Quite the opposite. Many of the technical building blocks have existed for some time, often in plain sight.
What was missing wasn’t innovation. It was a way to measure value that leadership could recognize, defend, and act on.
In several roles, I participated in the evolution of the discipline and saw moments where critical pieces nearly came together; most recently, configuration integrity on one side and network visibility on the other. Each offered an important but incomplete truth. Together, they hinted at something far more powerful: the ability to understand whether industrial systems were operating within safe, known, and governed bounds.
At the time, that synthesis was difficult to justify commercially. Not because it lacked merit, but because we were still evaluating cybersecurity primarily through threat-centric measures that obscured its operational and business value.
As I reflect on those moments, one thing is clear: the industry never lacked engineering rigor, data, or capability.
What it lacked was a shared understanding of which data mattered, how it interrelated, how it should be interpreted, and how it could be combined into evidence of reduced consequence, expressed in terms leadership could defend and invest in.
When we limit “security-relevant data” to control systems and what looks like a threat, we unintentionally overlook some of the richest sources of truth already present across OT environments. Many of these sources do more to prevent undesirable consequences than traditional security signals alone. To be clear, I have never said that current OT security practices are wrong, only that they are incomplete. ICS security matters, but it is only one contributor to overall security and resilience.
This is the gap that consequence-oriented thinking exposes.
A consequence-based lens, such as the one formalized in CIE-CORE and operationalized through CDPV, does not, at its core, introduce exotic new telemetry. It reconnects what already exists but lives in silos. Opportunities will surface as CIE-CORE moves up and down the intended processes.
Intended State (Engineering Truth)
- Control system configuration files
- PLC and DCS logic
- Safety system setpoints and trip thresholds
- Approved baselines and change histories
These define what the system is supposed to do.
Observed State (Operational Truth)
- Process telemetry and historian data
- Mode changes, overrides, and alarms
- Control loop behavior and stability
These reveal what the system is actually doing.
Asset Condition and Reliability (Failure Truth)
- Maintenance work orders and backlog
- Failure modes and corrective actions
- Deferred maintenance and asset aging
These show where risk is quietly accumulating.
Human Interaction (Dependency Truth)
- Operator interventions and acknowledgements
- Manual overrides and procedural workarounds
- Training currency and role qualification
These expose where resilience depends on people rather than design.
Governance and Decision Records (Authority Truth)
- Change approvals and exceptions
- Risk acceptance decisions
- Temporary compensating controls
These capture why certain risks exist and who owns them.
Cyber and Network Telemetry (Signal Truth)
- East–west communication patterns
- Remote access usage
- Authentication and session behavior
These provide signals, but rarely conclusions on their own.
Individually, none of these sources failed.
Collectively, they were never merged under a shared security lens.
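To make that concrete, here is a minimal sketch of what asking the consequence question across those six truths might look like. The field names, thresholds, and decision logic are all illustrative assumptions of mine, not a CIE-CORE or CDPV schema; the point is only the shift from “is this a threat?” to “does this create unacceptable consequence?”

```python
from dataclasses import dataclass

@dataclass
class TruthRecord:
    """One asset's evidence, drawn from sources that normally live in silos.
    Every field name here is illustrative, not a CIE-CORE or CDPV schema."""
    asset: str
    intended_setpoint: float       # engineering truth: approved baseline
    observed_value: float          # operational truth: historian telemetry
    overdue_work_orders: int       # failure truth: deferred maintenance
    manual_overrides_active: int   # dependency truth: human workarounds
    risk_acceptance_on_file: bool  # authority truth: is the exception governed?
    anomalous_sessions: int        # signal truth: corroborates, never decides alone

def unacceptable_consequence(record: TruthRecord, tolerance: float = 0.05) -> bool:
    """Ask the consequence question rather than the threat question.

    Drift outside intended bounds is treated as tolerable only when it is
    governed (a risk acceptance exists) and not compounded by deferred
    maintenance or active workarounds. All thresholds are placeholders.
    """
    drift = abs(record.observed_value - record.intended_setpoint) / abs(record.intended_setpoint)
    out_of_bounds = drift > tolerance
    compounding = record.overdue_work_orders > 0 or record.manual_overrides_active > 0
    ungoverned = not record.risk_acceptance_on_file
    return out_of_bounds and (compounding or ungoverned)

pump = TruthRecord(
    asset="feedwater_pump_01",
    intended_setpoint=120.0, observed_value=131.5,
    overdue_work_orders=3, manual_overrides_active=1,
    risk_acceptance_on_file=False, anomalous_sessions=0,
)
print(unacceptable_consequence(pump))  # True: drift plus backlog, with no governance record
```

The design choice worth noticing is that no single source decides. The engineering baseline defines the bounds, the maintenance and human-interaction records show whether a deviation is compounded, the governance record shows whether it is owned or orphaned, and the network signal merely corroborates.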
Each data source was gathered for a specific operational purpose and managed within its own silo, with value articulated in domain-specific terms. Because our security measurement systems asked, “Is this a threat?” instead of “Does this create unacceptable consequence?”, these sources were never interpreted as security-relevant evidence. It was as if we were looking left before crossing a London street: scanning diligently, but in the wrong direction. These sources were treated as operational responsibility, even exhaust, rather than as inputs to security decision-making.
As a result:
- Their combined value was invisible
- Investment was hard to justify
- Orchestration looked like cost instead of leverage
When we work backward from undesirable outcomes at Layers 0 and 1, a different picture emerges.
Security is no longer something we bolt on through patch cycles or vulnerability counts. It becomes an emergent property of systems that are:
- Operating within intended bounds
- Continuously validated against real conditions
- Governed with explicit authority and accountability
Progress toward that state is measurable. Once it is measurable, it becomes fundable.
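As a hedged illustration of what “measurable” could mean, the sketch below scores each critical function against those three properties and reports coverage across the fleet. The function names, flags, and equal weighting are assumptions made for the example, not a prescribed metric.

```python
# Illustrative only: score each critical function against the three
# properties above and report fleet-level coverage. The inputs and the
# equal weighting are assumptions, not a prescribed CIE-CORE metric.

functions = {
    # name: (within_intended_bounds, validated_against_real_conditions, governed)
    "boiler_pressure_control": (True, True, True),
    "feedwater_flow_control":  (True, False, True),   # validation lapsed
    "turbine_trip_logic":      (False, False, False), # drifted and ungoverned
}

def coverage(props: dict[str, tuple[bool, bool, bool]]) -> float:
    """Fraction of property checks passing across all critical functions."""
    checks = [flag for flags in props.values() for flag in flags]
    return sum(checks) / len(checks)

print(f"Consequence-readiness coverage: {coverage(functions):.0%}")  # 56%
```

Tracked period over period, a number like this gives leadership a defensible trend line, and a defensible trend line is exactly what makes the work fundable.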
I often use a simple analogy.
You don’t apply your budget like you’re buttering toast, spreading it evenly across the surface. You apply it like you’re buttering a waffle, concentrating where the holes are.
Those holes already exist, not in technology, but in how we measure value across the industrial security fabric. The data that reveals them already exists too. What’s been missing is a way to assemble those signals into a coherent view of consequence that leadership can trust.
This is why the conversation around orchestration matters now.
Not orchestration between security tools alone, but across engineering, operations, maintenance, and governance. When those sources of truth are finally evaluated through consequence instead of threat, cybersecurity stops being an expense and starts behaving like an investment.
If these sources of truth already exist, what would change if we finally measured them differently?
