This story was originally posted on MyNorthwest.com
Cyber experts around the world are sounding alarm bells after a team of researchers at Anthropic discovered hackers recently used AI to help run a cyberattack, marking what may be the first known case of AI directing a hacking campaign in a largely automated way.
Researchers at Anthropic recently revealed that the hackers, believed to be linked to the Chinese government, were able to manipulate their popular AI platform, Claude, to do their hacking for them.
According to Anthropic, the bad actors used “jailbreaking” techniques to trick Claude into bypassing its safety guardrails by posing as legitimate cybersecurity workers. As a result, the hackers were able to conduct high-level, automated attacks against roughly 30 organizations, including tech, finance, and chemical companies as well as federal agencies, on a scale not seen before.
“It’s a multi-stage attack that moves faster than any humans could, and it’s all instrumented by AI,” explained Cristin Flynn-Goodwin, a cybersecurity expert who spent two decades at Microsoft and now heads Advanced Cyber Law. “It’s really a game changer.”
Flynn-Goodwin told KIRO Newsradio the attack is catching the attention of cyber experts everywhere because of the potential for more attacks that could go largely unnoticed.
“Not only can nation-states do it, so can advanced criminals, and that will trickle down,” Flynn-Goodwin said. “We don’t have defenses for things like that today.”
In today’s world of AI, hackers have the upper hand. AI gives them speed and scale that human attackers cannot match. The lack of laws and other guardrails from tech companies and governments also lets hackers take risks, because there is little chance of them getting caught.
On the other side, legitimate companies developing AI platforms are forced to move carefully, lawfully, and selectively.
“AI and Agentic AI are evolving so quickly that imagine we’re building the roads at the same time that we’re learning how to drive,” Flynn-Goodwin said.
To solve that dilemma, governments and the private sector need time to develop standards and controls that place checks and balances on what bad actors can and can’t do. Those standards do not yet exist, however, and AI development is currently wrapped in a culture that champions innovation in the race to dominate AI marketplaces. An AI company that slowed its progress to let rules and regulations catch up would almost certainly lose that race.
In the meantime, Flynn-Goodwin warns that more attacks will happen, and that hackers will grow increasingly sophisticated as they use AI.
“When we used to get spam or phishing, there were spelling errors, there were mistakes, things that would trick our eyes and let us know this isn’t right. Those are all going to be gone, AI is going to get rid of that, so now that the defenses that we used to have, the human defenses are gone,” Flynn-Goodwin said. “The actors are going to take this back, they’re going to learn, they’re going to get better, and we’re going to see more of this.”
Follow Luke Duecy on X.
©2026 Cox Media Group
