
    Microsoft: OpenAI API moonlights as malware HQ

    Hackers have found a new use for OpenAI’s Assistants API – not to write poems or code, but to secretly control malware.

    Microsoft this week detailed a previously unseen backdoor dubbed “SesameOp,” which abuses OpenAI’s Assistants API as a command-and-control channel to relay instructions between infected systems and the attackers pulling the strings. First spotted in July during a months-long intrusion, the campaign hid in plain sight by blending its network chatter with legitimate AI traffic – an ingenious way to stay invisible to anyone assuming “api.openai.com” meant business as usual.

    According to Microsoft’s Incident Response team, the attack chain starts with a loader that uses a trick known as “.NET AppDomainManager injection” to plant the backdoor. The malware doesn’t talk to ChatGPT or do anything remotely conversational; it simply hijacks OpenAI’s infrastructure as a data courier. Commands come in, results go out, all via the same channels millions of users rely on every day.

    By piggy-backing on a legitimate cloud service, SesameOp avoids the usual giveaways: no sketchy domains, no dodgy IPs, and no obvious C2 infrastructure to block. 

    “Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a C2 channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment,” Microsoft said. “This threat does not represent a vulnerability or misconfiguration, but rather a way to misuse built-in capabilities of the OpenAI Assistants API.”

    Microsoft’s analysis shows the implant uses payload compression and layered encryption to hide both commands and exfiltrated results. The DLL is heavily obfuscated with Eazfuscator.NET and loaded at runtime via .NET AppDomainManager injection. Once running, the backdoor fetches encrypted commands from the Assistants API, decrypts and executes them locally, then posts the results back – techniques Microsoft describes as sophisticated and designed for stealth.

    For defenders, this is where things get messy. Seeing a connection to OpenAI’s API on your network doesn’t exactly scream “compromise.” Microsoft even published a hunting query to help analysts spot unusual connections to OpenAI endpoints by process name – an early step toward distinguishing genuine chatbot activity from malicious use.
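    The gist of that hunting approach can be sketched in a few lines. Microsoft’s actual query targets Defender telemetry in KQL; the Python below is a hypothetical stand-in (the log format, field names, and allowlist are all invented for illustration) showing the core idea – flag any process talking to api.openai.com that isn’t on a known-good list:

    ```python
    # Hypothetical sketch of the hunting logic: surface processes that connect
    # to api.openai.com but aren't expected to. The event schema and allowlist
    # below are invented for illustration, not Microsoft's actual query.

    ALLOWLIST = {"chrome.exe", "msedge.exe", "python.exe"}  # example known-good processes

    def flag_suspicious(events, allowlist=ALLOWLIST):
        """events: iterable of dicts with 'process' and 'remote_host' keys."""
        return [
            e for e in events
            if e.get("remote_host", "").endswith("api.openai.com")
            and e.get("process") not in allowlist
        ]

    events = [
        {"process": "msedge.exe", "remote_host": "api.openai.com"},   # expected
        {"process": "svchost.exe", "remote_host": "api.openai.com"},  # unusual pairing
        {"process": "svchost.exe", "remote_host": "example.com"},     # different host
    ]
    for hit in flag_suspicious(events):
        print(hit["process"], "->", hit["remote_host"])
    ```

    The catch, as Microsoft notes, is that any allowlist is site-specific: plenty of legitimate in-house tools now call OpenAI’s API, so hits like these are leads for analysts, not verdicts.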

    The Assistants API itself is scheduled for deprecation in August 2026, which may close this particular loophole. But the pattern is here to stay: if it’s cloud-hosted and trusted, it’s fair game. 

    Microsoft hasn’t said who’s behind the campaign, but noted that it shared its findings with OpenAI, which identified and disabled an API key and account believed to have been used by the attackers.

    OpenAI didn’t respond to The Register’s request for comment.

    In an age where everything from HR chatbots to help-desk scripts talks to an API, this won’t be the last time a threat actor turns your favorite cloud tool into their getaway car. ®
