
    AI platforms open new route for malware campaigns

    Cybersecurity researchers warn that AI assistants with web access could play a new role in malware campaigns. Instead of connecting directly to a command-and-control server, attackers can use AI platforms as an intermediary for communication, making malicious traffic less likely to be detected.

    Research by security company Check Point shows that AI assistants such as Grok and Microsoft Copilot can be misused to ferry commands and data between an infected system and an attacker’s infrastructure. The core of the problem is that these AI services can retrieve and summarize web pages: a legitimate feature that can also be repurposed for abuse.

    In the researchers’ proof of concept, the malware does not communicate directly with an external server, but with an AI assistant through its web interface. The malware instructs the AI to fetch a specific attacker-controlled URL. That page contains hidden instructions, which the AI processes and echoes back in its response. The malware then parses the response and extracts the actual commands or configuration from it.

    For this approach, the researchers use WebView2, a Windows 11 component that lets applications display web content without a full browser. Even if WebView2 is not present on the target system, Check Point notes, it can be bundled with the malware. The researchers built a C++ application that opens a WebView to Grok or Copilot, allowing the malware to interact with the AI; a hosting skeleton of this kind is sketched below.
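    Check Point has not published its full proof of concept, and the sketch below is not it: it covers only the publicly documented hosting step, adapted from Microsoft’s WebView2 getting-started sample. It is a minimal Win32 C++ program that creates a WebView2 control in a window and navigates it to a placeholder URL (https://example.com stands in for any site, including an AI assistant’s web front end).

```cpp
// Minimal WebView2 hosting skeleton, adapted from Microsoft's public
// getting-started sample. Requires the WebView2 SDK and WIL packages.
#include <windows.h>
#include <wrl.h>
#include <wil/com.h>
#include <WebView2.h>

using namespace Microsoft::WRL;

// The controller must outlive the creation callback, or the WebView is torn down.
static wil::com_ptr<ICoreWebView2Controller> g_controller;

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_SIZE && g_controller) {   // keep the WebView sized to the window
        RECT bounds;
        GetClientRect(hWnd, &bounds);
        g_controller->put_Bounds(bounds);
    }
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hWnd, msg, wp, lp);
}

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int nCmdShow) {
    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = L"WebView2Host";
    RegisterClass(&wc);
    HWND hWnd = CreateWindowEx(0, wc.lpszClassName, L"WebView2 host",
        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, 1024, 768,
        nullptr, nullptr, hInst, nullptr);
    ShowWindow(hWnd, nCmdShow);

    // Create the WebView2 environment. Passing nullptr for the browser
    // folder uses the installed runtime; a "fixed version" runtime shipped
    // alongside the application can be pointed to here instead.
    CreateCoreWebView2EnvironmentWithOptions(nullptr, nullptr, nullptr,
        Callback<ICoreWebView2CreateCoreWebView2EnvironmentCompletedHandler>(
            [hWnd](HRESULT, ICoreWebView2Environment* env) -> HRESULT {
                env->CreateCoreWebView2Controller(hWnd,
                    Callback<ICoreWebView2CreateCoreWebView2ControllerCompletedHandler>(
                        [hWnd](HRESULT, ICoreWebView2Controller* ctrl) -> HRESULT {
                            g_controller = ctrl;
                            wil::com_ptr<ICoreWebView2> webview;
                            g_controller->get_CoreWebView2(&webview);
                            RECT bounds;
                            GetClientRect(hWnd, &bounds);
                            g_controller->put_Bounds(bounds);
                            // Any URL can be loaded here; an AI assistant's
                            // web UI is just another page to the control.
                            webview->Navigate(L"https://example.com/");
                            return S_OK;
                        }).Get());
                return S_OK;
            }).Get());

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```

    The nullptr browser-folder argument is what makes Check Point’s bundling remark plausible: Microsoft documents a “fixed version” distribution of the WebView2 runtime that an application can carry with it instead of relying on a system-wide install.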

    The result is a two-way channel via an AI service that many security solutions treat as trustworthy, allowing the traffic to slip past existing filters, BleepingComputer reports. An additional advantage for attackers is that the method requires no account or API key, which makes it harder to stop the abuse by revoking access or blocking accounts.

    According to Check Point, that dependence on accounts and keys is normally the weak point when attackers abuse legitimate platforms for command and control: accounts can be closed and keys revoked, quickly rendering the infrastructure unusable. When the malware talks to an AI assistant directly through its web page, that countermeasure largely disappears, especially where anonymous use is allowed.

    Although AI platforms have safeguards that block clearly malicious interactions, the researchers argue these are relatively easy to circumvent: by encrypting the data and packaging it as seemingly random, high-entropy content, attackers can keep the malicious payload from being recognized.

    Check Point stresses that this is just one example of how AI services can be abused. More broadly, the researchers see scenarios in which AI not only acts as a conduit but also supports decisions, such as assessing the value of a target or determining the next steps, without raising alarm bells.

    The researchers notified Microsoft of the proof of concept. A spokesperson said the company appreciates the report and noted that attackers on compromised systems will always try to exploit whatever services are available, including AI-based ones. Microsoft therefore focuses on layered security, with measures designed both to prevent infections and to limit the impact after a breach.
