HSB flags ‘silent’ exposure across the sector

A new frontier of liability risk is rapidly taking shape as small and mid-sized businesses (SMEs) accelerate their adoption of artificial intelligence. Experts have flagged a new wave of “silent” AI liability exposures, echoing the early days of cyber risk.
According to HSB, part of Munich Re, widespread AI adoption among SMEs is already outpacing both risk awareness and insurance clarity. Internal data show that 74% of small businesses use AI tools today, while 91% expect to adopt them in the near future.
Timothy Zeilman, global head of product ownership at HSB, said the parallels with cyber risk are hard to ignore. One of the defining features of early cyber risk was its “invisibility” within traditional insurance policies. Zeilman believes AI is now following a similar trajectory.
“We’re seeing the same pattern we saw with cyber 15 or 20 years ago,” he said. “Adoption is happening very quickly, but the understanding of how that translates into insured risk is still catching up.”
‘Silent’ AI risks creeping up
AI-related exposures are increasingly embedded within existing lines such as professional liability, directors and officers (D&O), and general liability. However, they are often not being explicitly addressed in policy wordings. This creates a period of ambiguity for both insurers and policyholders, in which coverage may exist unintentionally and pricing may not reflect the true level of risk.
“In many cases, these risks weren’t contemplated when policies were written or priced,” Zeilman pointed out. “But in the absence of exclusions, they may still be picked up by those policies.”
That ambiguity is beginning to shift. The Insurance Services Office (ISO) has introduced new exclusions targeting AI-related losses, which took effect at the start of 2026. As carriers adopt or adapt these exclusions, the industry is moving toward a more defined (though potentially more restrictive) coverage landscape, one that “forces a clearer conversation about what is and isn’t covered,” Zeilman said.
Non-physical risks dominate early claims outlook
While AI’s long-term risk profile is still evolving, early indicators suggest that non-physical harms will drive the majority of claims, at least in the near term.
These include intellectual property and reputational risks linked to generative AI, according to Zeilman. He said SMEs are increasingly using tools like chatbots and content generators for marketing, social media, and customer engagement. However, these tools can produce outputs that inadvertently infringe on copyright, include defamatory statements, or misuse personal data.
“It’s very easy for a business to publish something generated by AI without realizing it contains infringing or problematic content, which creates a clear pathway to liability,” Zeilman said.
Reputational damage is closely tied to this risk. AI systems can “hallucinate” or generate inaccurate information, potentially leading to public-facing errors that harm a company’s brand.
In contrast, physical risks associated with AI (such as bodily injury or property damage) are developing at a slower pace but remain a growing concern. These risks typically require a bridge between digital decision-making and real-world action, such as robotics or smart building systems. Examples include delivery robots causing pedestrian injuries, AI-driven equipment malfunctioning, or automated systems making incorrect decisions in critical environments like fire detection or security.
“Some of these technologies are still in early adoption,” Zeilman said. “But it’s not hard to imagine scenarios where AI-driven systems lead to real-world losses.”
A shifting AI liability landscape
The evolution of AI risk is also prompting broader questions about responsibility and legal accountability. Current high-profile litigation has largely focused on developers of AI models, particularly in areas such as copyright infringement. However, Zeilman noted that businesses using these tools could increasingly find themselves drawn into disputes.
“If a company is deploying an AI-powered chatbot or using AI-generated content, they may share liability if something goes wrong,” he said. “It’s not just the developers who could be exposed.”
Other emerging areas of concern include claims related to harmful or manipulative AI behavior, such as allegations of addictive or misleading systems.
From an underwriting perspective, AI presents a challenging risk profile. While insurers can begin to model potential severity, particularly for large reputational or intellectual property claims, the frequency of losses remains highly uncertain. “We’re still in the early stages of understanding how often these events will occur,” Zeilman said.
HSB has launched an AI liability insurance product aimed squarely at this evolving exposure. The product is designed to offer affirmative coverage for AI-related incidents involving bodily injury, property damage, and personal or advertising injury, and reflects a broader industry shift, said Zeilman.
“As carriers start to adopt those exclusions or design their own similar exclusions, we will see a hole where there is no longer coverage for these kinds of events,” he said. “Ideally, we would work with companies that decide to implement those exclusions but then want to write back affirmative coverage for their customers, so their customers are not left with a gap in their insurance coverage.”