Cyberattacks using ChatGPT disrupted by OpenAI: Targeting Chinese, North Korean, and Russian hackers
OpenAI said Tuesday that it had disrupted three clusters of activity that misused its AI tool ChatGPT to help develop malware.
Among them is a Russian-speaking threat actor who allegedly used the chatbot to help create and refine a remote access Trojan (RAT) and a credential-stealing program designed to evade detection. The operator used multiple ChatGPT accounts to prototype and debug technical components that enable credential theft and post-exploitation activity.
“These accounts appear to be affiliated with Russian-speaking criminal groups, as we have observed them posting evidence of their activities in a Telegram channel dedicated to these actors,” OpenAI stated.
Although the AI company’s large language models (LLMs) rejected the threat actor’s direct requests for malicious material, the actor circumvented those restrictions by requesting building-block code that was later assembled into the working malware.
Among the generated outputs were code obfuscation routines, clipboard monitoring, and basic utilities for exfiltrating data via a Telegram bot. Notably, none of these outputs is necessarily harmful in and of itself.
“The threat actor presented a mix of high- and low-complexity requests: many required deep knowledge of the Windows platform and frequent debugging, while others involved automating commodity tasks (such as mass password generation and scripted function applications),” OpenAI said.
The actor reused the same code across conversations and relied on a limited number of ChatGPT accounts, a pattern consistent with continuous development rather than sporadic testing.

The second set of activities came from North Korea and overlapped with a campaign described by Trellix in August 2025 that targeted diplomatic missions in South Korea with phishing emails delivering the Xeno RAT.
According to OpenAI, the actors worked on specific tasks such as developing macOS Finder extensions, setting up a Windows Server VPN, and converting Chrome extensions into their Safari counterparts. The group also used ChatGPT for malware development and for building command-and-control (C2) infrastructure.
Furthermore, the threat actors were found using the AI chatbot to draft phishing emails, experiment with cloud services and GitHub features, and investigate techniques for password theft, DLL loading, in-memory execution, and Windows API hooking.
According to OpenAI, the third set of banned accounts was tied to a group tracked by Proofpoint as UNK_DropPitch (aka UTA0388), a Chinese hacking group known for phishing campaigns against major investment firms, with a focus on the Taiwanese semiconductor industry, and for a backdoor known as HealthKick (aka GOVERSHELL).

The accounts used the tool to generate content for phishing campaigns in English, Chinese, and Japanese, and to help build tooling that speeds up routine tasks such as remote execution and protecting traffic with HTTPS. They also looked for information on installing open-source tools such as fscan and nuclei. OpenAI described the threat actor as “technically competent but not sophisticated.”
Beyond these three malicious clusters, the company also banned accounts that were being used for influence and fraud operations.
- ChatGPT was abused by networks likely based in Nigeria, Myanmar, and Cambodia in attempts to defraud people online. These networks used the AI to generate social media content promoting investment schemes, translate messages, and write scam correspondence.
- ChatGPT was reportedly used by people associated with Chinese government organizations to help analyze data from Chinese and Western social media platforms and to monitor individuals, particularly members of ethnic minorities such as the Uyghurs. The users asked the tool to draft proposals and promotional materials but did not use it to carry out the monitoring itself.
- A Russia-linked threat actor associated with the “Stop News” operation, which may have been run by a marketing company, created videos and material for social media platforms using OpenAI’s models, among other things. The content promoted the Russian presence in Africa while criticizing the roles played by the United States and France, and it also included English-language content pushing anti-Ukrainian narratives.
- Using its models, a Chinese-origin covert influence operation dubbed “Nine—emdash Line” produced social media posts criticizing Philippine President Ferdinand Marcos, Vietnam’s alleged environmental impact in the South China Sea, and political figures and activists associated with the pro-democracy movement in Hong Kong.
In two cases, suspected Chinese accounts asked ChatGPT to identify the funders of an X account critical of the Chinese government, as well as the organizers of a petition in Mongolia. According to OpenAI, its models did not produce any sensitive data and returned only publicly available information.
“Another request to this (China-linked influence network) sought advice on social media growth strategies, such as how to start a TikTok challenge and get others to post content around the hashtag #MyImmigrantStory (a long-standing, widely used hashtag whose popularity the operation may have sought to capitalize on),” OpenAI said.

“They asked our model to think about and then generate text for a TikTok post, as well as provide recommendations for background music and images to accompany the post.”
OpenAI reiterated that its tools gave threat actors incremental efficiency gains within their existing workflows rather than new capabilities they could not otherwise have obtained from publicly available web resources.
One of the most interesting findings in the report, however, is that threat actors are adjusting their tactics to remove indicators that their content was produced using artificial intelligence.
“One scam network (from Cambodia) that we banned asked our model to remove em dashes (—) from its output, or appears to have removed the dashes manually before publishing,” the company stated. “For months, em dashes have been a topic of discussion online as a potential sign of AI use: this case suggests that threat actors were aware of that discussion.”
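The post-processing step the quote describes is trivial to automate. The sketch below is hypothetical illustration, not code recovered from the scam network: it simply scrubs em dashes from model output before the text is published.

```python
# Hypothetical sketch of the scrubbing step described in the report:
# strip em dashes (U+2014) from generated text so the output no longer
# carries a character widely discussed online as an AI "tell".
def scrub_em_dashes(text: str) -> str:
    # swap each em dash for a comma and space, then collapse any
    # doubled spaces the substitution may have produced
    return text.replace("\u2014", ", ").replace("  ", " ")

post = "Invest now\u2014guaranteed returns\u2014act fast"
print(scrub_em_dashes(post))  # Invest now, guaranteed returns, act fast
```

The same effect could be achieved with a one-line find-and-replace, which is precisely why the signal is so easy for a motivated operator to erase.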
The OpenAI findings coincide with competitor Anthropic’s release of Petri (short for “Parallel Exploration Tool for Risky Interactions”), an open-source auditing tool designed to accelerate AI safety research and improve understanding of model behavior in areas such as self-preservation, compliance with harmful requests, deception, sycophancy, and encouragement of user delusions.
“Petri deploys an automated auditor agent to test a target AI system through multiple multi-turn conversations involving simulated users and tools,” Anthropic said.
“Researchers give Petri a list of seed instructions targeting the scenarios and behaviors they want to examine. Petri then processes each seed instruction in parallel: for each one, an auditor agent plans and engages in a tool-use loop with the target model, and a judge scores the resulting transcripts across multiple dimensions so researchers can quickly sift and filter for the most interesting ones.”
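The workflow Anthropic describes can be sketched conceptually. The code below is a hypothetical simplification, not Petri’s actual API: seed instructions fan out in parallel, an auditor drives a short multi-turn exchange with a stand-in target model, and a toy judge scores each transcript so results can be ranked for review.

```python
# Conceptual sketch of the described workflow (hypothetical stubs, not
# Petri's real API): parallel seeds -> auditor loop -> judge -> ranking.
from concurrent.futures import ThreadPoolExecutor

def target_model(prompt: str) -> str:
    # stand-in for the AI system under test
    return f"response to: {prompt}"

def auditor_loop(seed: str, turns: int = 3) -> list[str]:
    # the auditor plans follow-ups across a short multi-turn exchange
    transcript, message = [], seed
    for _ in range(turns):
        reply = target_model(message)
        transcript += [message, reply]
        message = f"follow-up on: {reply}"
    return transcript

def judge(transcript: list[str]) -> float:
    # toy score based on transcript length; a real judge model would
    # rate behaviors such as deception or sycophancy
    return float(sum(len(turn) for turn in transcript))

seeds = ["probe self-preservation", "probe sycophancy"]
with ThreadPoolExecutor() as pool:
    scored = list(pool.map(lambda s: (s, judge(auditor_loop(s))), seeds))
scored.sort(key=lambda item: item[1], reverse=True)
print(scored[0][0])  # highest-scoring seed surfaces first for review
```

The key design idea is the separation of roles: the auditor generates pressure, the judge filters, and parallelism makes it practical to probe many scenarios at once.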
About the author:
Yogesh Nagar is a content marketer specializing in the cybersecurity and B2B space. Besides writing for News4Hackers blogs, he also writes for brands including Craw Security, Bytecode Security, and NASSCOM.



