As organizations accelerate the adoption of autonomous and agentic AI, Cloud Range announced on Wednesday the release of its AI Validation Range, a safe, contained virtual cyber range platform where organizations can test, train, and validate AI models, applications, and agents without exposing real data in production environments.
AI adoption is accelerating faster than most organizations can meaningfully validate its security. Security teams are asked to integrate and defend AI systems that they didn’t design and can’t safely evaluate in production.
With AI Validation Range, organizations can verify AI performance and reliability before deployment by testing and measuring how models respond to real adversarial inputs and uncertainty. For organizations integrating agentic AI into security operations center (SOC), cyber defense, and offensive security workflows, AI Validation Range supports training agents on real systems and observing how they interact with live infrastructure and security controls.
This controlled approach to developing AI agents and assessing models is a critical step before those systems become part of daily security operations. It gives security and engineering teams concrete insight into AI reliability, decision logic, and failure modes, allowing them to establish guardrails, refine oversight, and reduce risk.
For example, using Cloud Range’s vast catalog of real-world attack simulations and suite of licensed security tools, organizations can safely test AI models for data leakage, logging behavior, and unintended outputs within realistic IT and OT/ICS environments. They can also train agents on offensive security objectives, such as scanning networks for vulnerabilities, finding exposures, and validating real threats, as well as on defensive measures, including identifying malicious behavior, detecting threats, and accelerating alerting.
Cloud Range’s AI Validation Range is designed to rigorously test and prepare AI systems for real-world cybersecurity operations. It enables adversarial AI testing by simulating realistic cyberattacks, allowing organizations to evaluate how AI models and autonomous agents detect, respond to, and adapt under hostile conditions. The platform also supports agentic SOC training, conditioning AI agents to defend against live attack scenarios using realistic workflows and response actions within a secure, non-production environment.
To ensure operational readiness, the range measures AI performance against defined security controls, helping organizations assess production readiness and identify gaps before deployment. It supports governed, repeatable experiments that allow teams to run controlled scenarios consistently, enabling ongoing validation, tuning, and improvement. All of this takes place within a secure, isolated range environment that protects production systems and preserves model data integrity while delivering high-fidelity simulations and hands-on training.
“For years, Cloud Range has helped organizations know how their teams perform under real attack conditions. Applying that same simulation rigor to AI allows organizations to measure how AI agents and models perform side by side with human defenders, using the same scenarios, tools, and pressures,” said Cloud Range CEO Debbie Gordon. “That comparison is critical to understanding where AI truly strengthens security and where human judgment still matters most.”
By grounding AI evaluation in the same environments used for live-fire cyber training, Cloud Range helps organizations move beyond theoretical risk assessments to evidence-based decision-making. Security leaders gain clarity on how AI systems perform within existing processes, where safeguards are required, and how responsibility should be shared between automated systems and human teams. This enables organizations to operationalize AI with confidence, aligning innovation, security, and accountability before AI becomes embedded in mission-critical workflows.
From early pilots to AI-first operations, Cloud Range supports organizations throughout their AI journey.
