    AI exclusions are creeping into insurance – but cyber policies aren’t the issue yet

    Ambiguous AI exclusions are raising new questions for D&O, E&O and management cover

AI-related exclusions may be spreading across commercial insurance – but they're not yet disrupting cyber policies. Alexandra Bretschneider, vice president and cyber practice leader at Johnson Kendall Johnson, said that while the market has seen some carriers add AI-specific exclusions, most cyber insurers have taken the opposite approach.

    “We have seen, from a cyber insurance standpoint, some insurance carriers come outright to add AI endorsements to clarify they’re still intending to cover losses that are initiated by an AI threat actor,” she said.

    That clarity matters. As AI-fueled attacks – particularly deepfakes and social engineering fraud – become more sophisticated, cyber policies remain one of the few places where insurers appear to be reinforcing coverage, not retreating from it.

Bretschneider noted that AI doesn't so much create new exposures as amplify old ones. "AI creates a murkiness of new exposures that, really, to me, are not a far cry from some of the silent cyber risks we were experiencing to begin with," she said. "The insurance industry still does need to continue to resolve" where digital events like outages, cyberattacks, or AI-enabled malfunctions fit – particularly when they cause physical damage or bodily injury.

    The bigger concerns lie outside of cyber

    Where things get more complicated is in lines like management liability and professional indemnity. Here, some carriers are introducing broad-based AI exclusions, often without meaningful definitions. Bretschneider cited the Berkley exclusion as an example: “They err on the side of an extremely broad definition of not only AI itself, but how it’s being utilized,” she said. “Any dependence upon any utilization of AI is really where they exclude.”

    That language is now showing up in D&O, E&O, employment practices, fiduciary and crime cover. And it’s the scope – not just the presence – of these exclusions that’s raising red flags.

    Still, she cautioned that clients shouldn’t panic about their current cyber protection. “Organizations today do not need to panic that their coverage is in a position to deny an AI-related claim if it’s for something that is already traditionally intended to be covered by the policy,” she said.

    Standalone AI coverage? Not likely

    While some insurers appear to be walking back coverage, Bretschneider didn’t foresee AI becoming its own distinct line of insurance in the way cyber did. “It’s not a far reach from cyber risk in so many ways,” she said. “There’s absolute regulatory concerns… there’s privacy concerns… there’s this kind of, what we would call today, the silent cyber piece of bodily injury and property damage.”

    She argued that the industry already has the framework to handle AI exposures – what’s missing is alignment and modernization. “There’s all these things that we already have the bones to cover, that why not adapt what we already have to allow for it,” she said.

    While a blended product could eventually emerge, Bretschneider expected most risk to be absorbed into existing lines, rather than splintering off into a standalone AI category. One possible exception: AI developers, who may need tailored E&O coverage to reflect their unique exposure.

    Coverage gaps still wide open

    Currently, practical insurance solutions tailored to AI remain scarce. “There are a handful, and I truly mean probably fewer than five fingers’ worth of AI-specific products out on the market,” Bretschneider said. One of the few available is Armilla AI, which is designed to address certain legal and financial harms.

However, Bretschneider noted that such products generally do not extend to every possible risk. The Armilla AI product, for example, does not cover bodily injury or property damage – leaving major exposure areas, particularly those tied to operational failures or flawed decision-making, largely unaddressed.

    Renewals triggering tougher internal conversations

    AI exclusions, even when not fully understood, are forcing more policyholders to reassess their risk frameworks. “It’s absolutely prompted discussion in an appropriate risk management way,” Bretschneider said.

    Conversations now extend into acceptable use policies, access restrictions, and employee conduct. “It’s really a proceed-with-caution advisory tale,” she said. “Technology innovation is a wonderful, beautiful thing. AI or otherwise. It just needs to be done with a few guard rails in place.”

    So far, those conversations are happening inconsistently across the market – but she expects that to change as exclusions become more prevalent and AI-related litigation increases.

    No distinction between AI and human attackers

    When it comes to underwriting cyber threats, Bretschneider dismissed the idea that insurers should distinguish between human and AI-originated attacks. “There does not seem to be value in creating an exclusion to say, hey, we’ll cover you if a human fools you, but not if you’re fooled by a deepfake,” she said.

    From a coverage perspective, what matters is impact – not source. Insurers are more likely to scrutinize an organization’s verification systems than the technology used to breach them. “What validation processes do you have in place to make sure the person on the other line is who they say they are?” she said.

    Focus on processes, not just policies

    As AI tools are deployed across business units, Bretschneider expects to see the same process-oriented scrutiny spread into D&O and E&O underwriting.

    “Fine, if you want to utilize AI, but what’s your process for validating your reliance upon the results?” she said. Just as underwriters want to see callback procedures before covering vendor payment fraud, they’ll expect to see documented quality controls for AI-based decisions.

    The same logic applies to employee misuse. If an employee uploads personally identifiable or health information to a public AI tool, that will likely be considered an unauthorized disclosure – and a potential breach. “So, as of right now, again, not seeing a coverage issue there today,” she said, assuming policy and regulatory conditions are met.
