Rising loss potential, AI-driven threats and legacy tech exposure are forcing insurers and buyers to rethink cyber limits, coverage design and risk monitoring

Cyber risk is no longer defined by a single breach scenario or a narrow set of controls. As threat actors multiply and attack techniques become more sophisticated, businesses face a harder reality: the scale of potential loss has increased, and many existing cyber insurance programs are no longer calibrated to that exposure.
Andy Lea, chief insurance officer for professional lines at Embroker, said the most significant shift he sees is structural rather than theoretical. “With exposures increasing, threat actors increasing, threat vectors and exposures increasing, businesses need more limit to be adequately protected,” he said. “They certainly need the latest policy forms and coverages to be adequately protected.”
That pressure for higher limits is unfolding alongside a broader reassessment of how cyber policies interact with professional risk, particularly as artificial intelligence becomes embedded in daily business workflows.
Limits rise as losses become harder to bound
For much of the past decade, cyber insurance buying has been driven by checklists focused on ransomware, business interruption, and breach response. Lea said the conversation has shifted toward scale and aggregation.
As AI-driven tools lower the barrier for attackers, losses move faster and spread wider. Voice cloning, automated phishing, and AI-enabled social engineering make it easier to exploit human behavior at speed. “Businesses need more limit,” Lea said, not only because attacks are more frequent, but because the financial consequences can escalate quickly.
At the same time, policy wording comes under greater scrutiny. Older forms often fail to contemplate newer attack vectors or emerging cost drivers, leaving insureds exposed in subtle but meaningful ways. As a result, coverage currency matters as much as price.
Social engineering shifts focus from systems to people
AI-enabled social engineering has emerged as one of the most acute risk areas, with Lea emphasizing that process discipline matters as much as technology.
“Carriers and brokers both play a role,” he said. Brokers often remain closest to clients, helping them understand how to present themselves as strong risks from an underwriting perspective. Carriers, meanwhile, increasingly support those conversations through pre-breach services and risk guidance.
Technology debt emerges as an underestimated exposure
One of the sharpest divides Lea observes in claims experience is between newer technology firms and more established organizations. Startups, he said, often benefit from a lack of technology debt.
“The customers that we deal with are small technology companies that don’t have any technology debt,” he said. “They’re in the cloud, they’re using the latest and greatest network protection and cybersecurity tools. They have fewer claims, and when they do have them, they’re not as big.”
Legacy systems tell a different story. Older networks and applications are harder and more expensive to defend, creating vulnerabilities that are difficult to quantify but materially affect cyber risk. Lea said many organizations underestimate the true cost of that exposure. “From a cybersecurity perspective, they’re underestimating how much that technology costs,” he said. Updating infrastructure, he added, is as much a risk decision as a technology one.
AI complicates the boundary between cyber and professional risk
As AI becomes embedded in professional services, Lea warned that cyber insurance alone is often insufficient. “When a company is using AI in their professional services, the coverage that cyber policies provide is actually pretty limited,” he said.
For firms delivering services using AI, errors and omissions coverage becomes critical. Lea said there is often “silent coverage” for AI under professional policies, but that silence creates uncertainty. Explicit exclusions remain relatively rare, but they do exist.
Embroker has responded by drafting an endorsement that makes AI-related coverage more explicit. “Companies that are offering AI within their professional services need to make sure they have a true professional liability policy in addition to cyber, and that it does not have AI exclusions,” Lea said.
Underwriting moves toward continuous monitoring
Looking ahead, Lea expects AI to play a growing role not only in risk creation, but in underwriting itself. Submission processes, monitoring tools, and risk signals are becoming increasingly automated. “There’s going to be more and more use of AI in underwriting,” he said, including expanded monitoring over the policy term.
For growth-stage companies, that shift carries practical implications. Understanding how insurers assess and monitor risk is becoming as important as purchasing coverage itself. In a market shaped by AI on both sides of the equation, the message is clear: protection depends on adequate limits, modern coverage, and a realistic view of how technology choices shape risk.