    Cyber experts’ message on AI threats rings louder after Canadian Tire breach

    Canadian Tire’s breach highlighted why companies must adopt active cyber defence and deterrence strategies


    The importance of proactive cyber defence was underscored this week after Canadian Tire confirmed a breach affecting customers of its e-commerce platforms, including SportChek, Mark’s/L’Équipeur and Party City. The incident has reignited calls for companies to shift from passive protection to active deterrence – a theme that dominated discussion at the National Insurance Conference of Canada (NICC).

    Artificial intelligence is forcing insurers, regulators, and cybersecurity experts to rethink how they define and manage risk. At a panel during NICC, leaders in AI, law, and cybersecurity explored how organizations can shift from reacting to attacks to preventing them – and how regulation and insurance products must evolve to meet the challenge.

    From defence to deterrence

    Luigi Lenguito, CEO of BforeAI, said the industry must change its mindset about how it approaches cyber risk in the age of artificial intelligence.

    “It’s critical to move away from fighting attacks after they happen,” Lenguito said. The cost of these attacks, once they touch a company, is already massive, he said. The objective, he added, has to be creating a context in which the criminal does not want to attack in the first place.

    That, he explained, means focusing on cyber deterrence rather than reaction – increasing the cost and complexity of attacks and forcing adversaries to think twice before launching them.

    One known lever is raising the cost of attack – making it more cumbersome and more expensive for criminals by building stronger defences, Lenguito said.

    Some companies, and even countries, are now looking at how to actively deter criminal operations instead of just enduring them, he said.

    He warned that the industry can no longer afford a “wait and see” attitude. “We need to pass the point of being victims,” he said. “We can’t just wait and see what happens. We have to prepare and be ready before the attack starts.”

    Hard targets, not soft ones

    Paul Caiazzo, chief threat officer at Quorum Cyber, said most organizations still underestimate how artificial intelligence could reshape the threat landscape – including internally, through what he calls “shadow AI.”

    “One of the most important things we can do is make ourselves hard targets,” Caiazzo said. Incident response tabletop exercises are not new, but most organizations haven’t incorporated attacks against their own AI systems into them, he explained.

    He urged companies to develop response plans that include AI misuse or compromise, such as a corrupted large language model (LLM) or manipulated internal algorithm. “Even if you’re just dipping your toes into AI, you should have some notion of what the worst outcome could look like and measure that in your tabletop exercises,” he said.
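    To illustrate what such a plan might contain (a sketch, not anything presented at the panel), an LLM-compromise tabletop could be scripted as a series of timed injects. All scenario names, timings, and events below are hypothetical.

```python
# Illustrative only: a minimal tabletop-exercise inject list for an
# AI-compromise scenario. Names and fields are hypothetical, not drawn
# from the panel or any specific incident-response framework.
from dataclasses import dataclass

@dataclass
class Inject:
    time_min: int   # minutes into the exercise
    event: str      # what the facilitator announces
    question: str   # what the response team must decide

llm_compromise_scenario = [
    Inject(0,  "Support chatbot begins citing prices that don't exist",
               "Who can pull the model offline, and who signs off?"),
    Inject(20, "Logs show prompt-injection strings in recent user queries",
               "Do we treat this as a data breach? Who must be notified?"),
    Inject(45, "Poisoned responses have been quoted in customer emails",
               "What is the external communication and rollback plan?"),
]

for inject in llm_compromise_scenario:
    print(f"T+{inject.time_min:>2} min | {inject.event}\n"
          f"          decide: {inject.question}\n")
```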

    Beyond planning, Caiazzo observed that attackers are getting more sophisticated – and more organized. “We’re seeing consolidation among adversaries,” he said, referencing cybercriminal groups such as Scattered Spider, Cl0p, and ShinyHunters, which recently collaborated in high-profile breaches.

    They are very well resourced and they’re working together, he said. “That’s something we could learn from – more collaboration and information sharing among victims of cybercrime can help us all.”

    Preparing for AI-enabled threat actors

    Both speakers agreed that attackers’ use of AI will make their campaigns faster, more adaptive, and harder to detect. Caiazzo warned that AI-assisted phishing, deepfake-enabled scams, and data exfiltration tools are becoming smarter by the week.

    Phishing attacks are going to get better and more frequent, he said. “That means our users need to be better trained to recognize them – and to act when they do.”

    Lenguito added that AI can also be used defensively, to identify threats before they materialize. He cited new prescriptive analytics and predictive AI tools capable of mapping criminal infrastructure before attacks begin.
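    To make that concrete, one common form of pre-attack mapping is spotting lookalike domains as soon as they are registered, before they host a phishing page. The sketch below is illustrative only – not BforeAI's product – and every domain and the similarity threshold are invented.

```python
# Illustrative sketch: flag newly registered domains that resemble a
# protected brand, a crude stand-in for predictive mapping of attack
# infrastructure. All domains below are made up.
import difflib

brands = ["canadiantire", "sportchek", "marks"]
new_registrations = [   # hypothetical feed of newly registered domains
    "canadian-tire-support.com",
    "sportchek-orders.net",
    "weathernetwork.ca",
]

for domain in new_registrations:
    label = domain.split(".")[0]
    for brand in brands:
        score = difflib.SequenceMatcher(None, label, brand).ratio()
        if score > 0.7:   # arbitrary threshold for this sketch
            print(f"review before it is weaponized: {domain} "
                  f"(~{score:.0%} similar to '{brand}')")
```

    A production system would weigh many more signals – registration metadata, DNS records, hosting patterns – but the principle is the same: identify and score attack infrastructure before it is used.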

    He noted that this proactive approach not only reduces losses but also builds trust between companies and their clients. “The cost of managing risk before it happens is much less than managing it after the fact,” Lenguito said.

    The regulatory race

    Turning to regulation, Nathalie David, partner at Clyde & Co., said Canada remains behind Europe in setting comprehensive rules for AI, but the need for clarity is urgent.

    There’s a general recognition that Canadian legislators should establish a clear AI regulatory framework, she said.

    “We were one of the first to put forward this kind of legislation, but it was sidestepped. Clearly, we need to get back to it as soon as possible.”

    David suggested that a risk-based approach, similar to the European Union’s AI Act, would help balance innovation with accountability. It should focus on prohibited or high-risk systems and avoid overburdening general-use applications that carry lower risks, she said.

    However, she warned that future laws must remain adaptable as technology evolves. It’s important to protect personal information and guard against bias, but we also need to allow space for innovation, David said. “Canada should align with international standards where possible, particularly those of the EU.”

    AI insurance will follow the path of cyber coverage

    Michael Berger, head of AI insurance at Munich Re, said the evolution of AI will inevitably lead to a new class of insurance products – purpose-built to handle the unique characteristics of AI risk.

    “Given the need for specialized expertise and new methods, I believe we’ll see AI insurance develop much like cyber insurance did,” Berger said. “Certain AI risks require distinct evaluation and control frameworks – that means they’ll need their own standalone AI insurance programs.”

    Berger said the challenge is twofold: understanding the probabilistic nature of AI errors and managing potential aggregation risks tied to foundation models used across industries.

    To address these challenges, Berger said Munich Re has built teams combining actuarial, mathematical, and data science expertise – hiring PhDs directly from academia to quantify AI error rates and model hallucination risk. “We need to develop new methodologies,” he said. “The same statistical assumptions we’ve used for decades won’t hold for AI.”
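    To give a sense of the kind of estimation involved (a toy sketch under invented numbers, not Munich Re's methodology), a first pass at quantifying an AI error rate could treat audited model outputs as Bernoulli trials and put a Beta posterior on the failure rate:

```python
# Illustrative sketch only - not Munich Re's methodology. Estimates a
# model's hallucination rate from audited samples with a Beta posterior,
# then computes a naive expected-loss figure. All numbers are invented.
from scipy import stats

audited = 2_000        # hypothetical: model outputs reviewed by humans
hallucinations = 37    # hypothetical: outputs found to be wrong

# Uniform Beta(1, 1) prior updated with the audit counts.
posterior = stats.beta(1 + hallucinations, 1 + audited - hallucinations)
low, high = posterior.ppf([0.025, 0.975])
print(f"hallucination rate: {posterior.mean():.3%} "
      f"(95% credible interval {low:.3%} - {high:.3%})")

# Naive expected annual loss if each error costs a fixed amount.
outputs_per_year = 5_000_000   # hypothetical usage volume
cost_per_error = 12.50         # hypothetical cost per bad answer
print(f"expected loss: ${posterior.mean() * outputs_per_year * cost_per_error:,.0f}")
```

    Berger’s aggregation concern is exactly where such a simple model breaks down: when many insureds build on the same foundation model, their errors are correlated, so independent-trial statistics like these understate tail risk – one reason he argues new methodologies are needed.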
