OpenAI Stops Global Hackers Misusing ChatGPT
OpenAI halts hackers from Russia, North Korea, and China exploiting ChatGPT for malware and phishing attacks.
Oct 8, 2025
OpenAI has taken action against cybercriminals exploiting its ChatGPT platform to support cyberattacks.
The company announced that it had disrupted three coordinated hacking operations originating from Russia, North Korea, and China that used the chatbot for malicious purposes, including malware development and phishing campaigns.
“These accounts appear to be affiliated with Russian-speaking criminal groups, as we observed them posting evidence of their activities in a Telegram channel dedicated to those actors,” OpenAI stated in its October 2025 update.
The dark side of generative AI
This discovery underscores the dual-use risk of large language models (LLMs) such as ChatGPT: the same capabilities that deliver legitimate productivity gains can be harnessed for cybercrime.
The misuse of generative AI for malware development, social-engineering campaigns, and information theft represents a growing concern for security teams and policymakers.
The first disrupted group was a Russian-language actor developing a remote access trojan (RAT) and credential stealer.
Although ChatGPT refused to generate explicitly malicious code, the actor used it to create modular code snippets — such as obfuscation scripts and data-exfiltration routines — that were later combined into functional malware.
A North Korean cluster, reportedly linked to a campaign identified by Trellix in August 2025, used ChatGPT to support malware development and command-and-control (C2) infrastructure.
These actors experimented with Windows and macOS configurations, phishing templates, and API hooking techniques to target diplomatic and government entities.
The third cluster, associated with the Chinese group UNK_DropPitch, leveraged ChatGPT to craft multilingual phishing content and develop backdoors aimed at the Taiwanese semiconductor sector. According to OpenAI, this group was “technically competent but unsophisticated.”
AI supercharges old attack tactics
Each actor sought to use ChatGPT to accelerate existing attack workflows rather than invent new offensive capabilities.
The Russian operators iterated on their code across multiple ChatGPT sessions to refine RAT functionality. North Korean hackers used the AI to simulate phishing lures and automate malware deployment scripts.
Meanwhile, Chinese-linked accounts generated phishing messages in English, Chinese, and Japanese and searched for ways to automate remote execution through HTTPS channels.
Outside these three major groups, OpenAI also identified smaller networks from Cambodia, Myanmar, and Nigeria attempting to use ChatGPT for online scams and influence campaigns.
Some accounts tied to Chinese state-linked operations reportedly tried to analyze social-media data and generate propaganda materials about ethnic minorities and geopolitical issues.
Mitigating the AI misuse risk
OpenAI’s actions included banning the offending accounts, tightening monitoring systems, and sharing findings with security partners.
The company reiterated that its LLMs are built with layered safeguards to block malicious prompts, though persistent actors sometimes try to circumvent those restrictions by requesting benign-looking code fragments that can later be assembled into malware.
To reduce exposure to AI-driven threats and strengthen overall security posture, organizations should implement the following measures:
- Monitor and detect threats: Track indicators of compromise (IOCs), RATs, and AI-assisted attacks using endpoint detection and response (EDR) and security information and event management (SIEM) tools.
- Enforce access controls: Apply least privilege, MFA, and network segmentation to limit lateral movement.
- Harden systems: Secure development and endpoint environments, allowlist apps, and update dependencies.
- Govern AI use: Define AI usage policies, monitor activity, and log prompts for accountability (see the sketch after this list).
- Train employees: Educate staff to recognize phishing messages, including convincing AI-generated lures.
- Collaborate and test: Share threat intel and run AI-focused red-team and incident response drills.
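As one illustration of the prompt-logging recommendation above, here is a minimal Python sketch of an audit wrapper around an LLM call. The `logged_completion` function, the generic `call_model` callable, and the log field names are hypothetical illustrations, not any vendor's API; in practice the JSON lines would be shipped to your SIEM.

```python
import json
import logging
import time
import uuid
from typing import Callable

# Emit one JSON object per line so a SIEM can ingest the audit trail directly.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_governance")

def logged_completion(call_model: Callable[[str], str], user: str, prompt: str) -> str:
    """Wrap any LLM call so every prompt is recorded for later review."""
    record = {
        "event": "llm_prompt",
        "request_id": str(uuid.uuid4()),  # hypothetical field; correlate with other logs
        "user": user,                     # who issued the prompt
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,                 # full prompt text for accountability
    }
    response = call_model(prompt)              # delegate to the real API client
    record["response_chars"] = len(response)   # record size only, not the output itself
    log.info(json.dumps(record))
    return response

# Stubbed model call for demonstration; swap in a real API client in practice.
if __name__ == "__main__":
    echo_model = lambda p: f"echo: {p}"
    print(logged_completion(echo_model, "analyst@example.com", "Summarize today's alerts"))
```

Centralizing records like these makes it possible to spot the pattern OpenAI describes, where innocuous-looking fragments requested across many sessions add up to something malicious.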
OpenAI’s report highlights how malicious actors are “bolting AI onto old playbooks” to move faster, not necessarily to innovate.
As generative-AI capabilities become embedded across industries, defenders must assume that adversaries will use similar tools for reconnaissance, coding assistance, and psychological manipulation.
With attackers increasingly turning to AI for deception and manipulation, deepfake detection tools have become a critical line of defense.