North Korean hackers use ChatGPT to create deepfake IDs to target the South Korean military
“The attempted attack on a South Korean defense agency by North Korea’s Kimsuky hackers, using deepfake ID images generated with artificial intelligence, has drawn attention to the growing use of AI in cyber threats targeting national security.”
According to a report released on Monday, a North Korea-linked hacking group used deepfake images generated by artificial intelligence (AI) to launch a cyberattack on South Korean organizations, including some involved in defense.
According to an investigation by Genians, a South Korean cybersecurity firm, the Kimsuky group, a hacking outfit believed to be backed by the North Korean government, carried out a spear-phishing attack on a military-related organization, Yonhap News Agency reported.
Spear phishing is a type of targeted attack, typically carried out by sending tailored email messages that impersonate trusted sources.
The report said:

Posing as correspondence about the issuance of identity documents for officials with military affiliations, the attackers sent an email attachment containing malware. The sample ID card image is believed to have been generated by an AI model, indicating the Kimsuky group’s use of deepfake technology. As part of a broader scheme to evade international sanctions and earn foreign currency for the regime, North Korean operatives have also created fake virtual identities to pass technical assessments during recruitment processes. According to the Genians Security Center (GSC), these incidents show North Korea’s expanding efforts to use AI services for increasingly sophisticated and malicious purposes. “Although AI services are effective tools for increasing productivity, they also pose potential risks when abused for cyber threats at the level of national security,” the report noted. “As a result, companies need to prepare for the misuse of artificial intelligence and strengthen security monitoring across hiring, operations, and business processes.”

Because government-issued identity documents are legally protected, AI systems such as ChatGPT normally refuse requests to generate reproductions of military IDs.
But according to the GSC report, the attackers appear to have bypassed these safeguards by requesting mock-ups or sample designs for ostensibly “legitimate” purposes rather than exact replicas of real IDs.
The findings follow a separate report, published in August by US-based Anthropic, the company behind the AI service Claude, detailing the misuse of AI by North Korean IT workers.
About the author
Suraj Cole is a content specialist who writes about cybersecurity and information security. He has written numerous articles on cybersecurity concepts, covering the latest trends in cyber awareness and ethical hacking.
Read more:
WhatsApp vs. India’s HRUSTOM: a clash over market power and data privacy