Report: AI Drives Cyber Attacks That Unfold in Minutes

Artificial intelligence is speeding up timelines for cyber attacks, a new report has found, creating what the authors call a widening “cybersecurity speed gap” between bad actors and defense efforts.

The report from Booz Allen Hamilton, published this month, shows that cyber criminals are now moving from initial access to broader system compromise in less than 30 minutes on average — and sometimes in seconds. And attackers are using AI as a collaborator in their speedier attacks. For example, AI is helping cyber criminals quickly create realistic phishing emails, research multiple targets in minutes and write malicious code even if they lack coding skills. It’s also enabling small groups to carry out campaigns that used to require larger, coordinated groups.

As a result, human defenders are struggling to keep pace with the speedy new landscape of AI-powered cyber threats.

Manycybersecurityprocesses — from alert triage to incident response — depend on human decision-making that can take days to weeks due to manual approvals, alert backlogs and other factors. That pace is no longer realistic for staying ahead of criminals, the report found.

The report also discusses barriers to entry, which have significantly dropped now that criminal organizations can code with AI tools, test exploits and refine attacks in “rapid cycles,” sharing these capabilities across their ecosystems. At the same time, AI adoption has greatly expanded the attack surface because it means that there are more platforms and workflows to target. One concern is that attackers are embedding hidden instructions in emails, documents or webpages that can manipulate AI systems or influence how they behave.

To close the speed gap, the report outlines several shifts for cybersecurity teams. First, containment should begin immediately through preapproved, automated actions that can occur while an intrusion is unfolding.

“Organizations should prioritize tools that enable automated containment, enforce policy at scale, and provide auditability for every automated decision,” the report reads.

Zero-trust frameworks are also recommended, and AI platforms should be secured as critical infrastructure because they hold sensitive data, connect to multiple systems, integrate with code depositories and trigger actions in multiple ways. Finally, live cyber analysts who oversee many AI functions — referred to as a human-AI teaming model — would multiply defense capabilities, speeding up detection and mitigation, which would allow for intervention and refinement where needed.

 

Latest articles

Related articles