North Korean agents using AI to trick western firms into hiring them, Microsoft says

by Chief Editor

North Korea’s AI-Powered Job Scam: A New Era of Cybercrime

North Korean agents are increasingly leveraging artificial intelligence to infiltrate Western companies, not through traditional hacking, but by securing legitimate employment. This sophisticated scheme, detailed in recent reports by Microsoft and outlined by security firm Validin, represents a significant evolution in state-sponsored cybercrime, moving beyond data breaches to long-term, financially motivated operations.

The Rise of AI-Enhanced Impersonation

For years, North Korea has employed remote IT workers to generate revenue for the regime. However, the latest operations, attributed by Microsoft’s threat intelligence unit to actors it tracks as “Jasper Sleet” and “Coral Sleet”, show markedly greater sophistication thanks to AI. Scammers are now using AI-powered tools to create convincing fake identities, alter stolen IDs, and even mask their accents during video interviews.

The process involves crafting “culturally appropriate” names and email addresses; Microsoft’s research cites AI prompts such as “create a list of 100 Greek names.” Voice-changing software helps overcome accent barriers, and AI image-generation tools such as Face Swap are used to create professional-looking headshots for CVs and to insert faces into falsified identification documents. This lets applicants pass as local candidates, improving their chances of securing remote IT positions.

Beyond Hiring: Exploitation and Extortion

Once employed, these individuals don’t simply collect a paycheck. They funnel wages back to Pyongyang, helping to finance the regime’s weapons programs, and they pose a growing risk of data exfiltration and extortion. Reports indicate that some operatives have threatened to release sensitive company data if terminated, adding another layer of complexity to the threat.

Microsoft’s investigation revealed that these remote workers utilize AI to assist with their daily tasks, including writing emails, translating documents, and generating code. This helps them maintain the facade of a legitimate employee and avoid detection for longer periods.

The Financial Impact and Government Response

The scale of this operation is substantial. In June 2025, the U.S. Department of Justice announced nationwide enforcement actions targeting these illicit revenue-generation schemes, dismantling 29 laptop farms and seizing roughly 200 computers used by North Korean IT workers. Despite these efforts, the problem persists, fueled by the accessibility and affordability of AI tools.

Protecting Your Organization: A Multi-Layered Approach

Companies are being urged to adopt a more rigorous hiring process, particularly for remote IT positions. Microsoft recommends conducting interviews via video call or, ideally, in person, to assess candidates for “tells” indicative of deepfake technology, such as pixellation around facial features or inconsistencies in lighting.

However, technology alone isn’t enough. A comprehensive security strategy must include robust identity verification procedures, continuous monitoring of employee activity, and employee training to recognize and report suspicious behavior.
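To make the idea concrete, a layered screening process can be reduced to a simple scoring step. The sketch below is illustrative only: the red-flag names and weights are hypothetical examples chosen for this article, not criteria from Microsoft’s report, and any real program would pair such a checklist with human review.

```python
# Illustrative sketch of a remote-hire screening step.
# The flag names and weights below are hypothetical examples,
# not criteria published by Microsoft or the DOJ.

RED_FLAGS = {
    "refused_live_video_interview": 3,
    "id_photo_mismatch": 3,
    "laptop_shipped_to_third_party_address": 3,
    "mailing_address_differs_from_stated_location": 2,
    "voip_only_phone_number": 1,
}

REVIEW_THRESHOLD = 3  # at or above this score, escalate to manual identity verification


def screen_applicant(observed_flags):
    """Return (score, escalate) for a set of observed red-flag names."""
    score = sum(RED_FLAGS.get(flag, 0) for flag in observed_flags)
    return score, score >= REVIEW_THRESHOLD


# Example: two weaker signals together still trigger a manual review.
score, escalate = screen_applicant(
    {"voip_only_phone_number", "mailing_address_differs_from_stated_location"}
)
print(score, escalate)  # 3 True
```

The point of the sketch is the layering: no single signal is decisive, but cheap automated checks can decide when to spend the expensive ones (live video, document verification).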

Future Trends: What to Expect

The use of AI in this type of cybercrime is likely to escalate. As AI technology becomes more sophisticated and accessible, we can anticipate:

  • More Realistic Deepfakes: The quality of AI-generated videos and images will continue to improve, making fraudulent candidates increasingly difficult to detect.
  • Automated Application Processes: AI could be used to automate the entire job application process, from resume creation to interview scheduling, further streamlining the scam.
  • Targeted Attacks: Scammers may focus on specific companies with valuable intellectual property or access to sensitive data.
  • Expansion to Other Industries: While currently focused on IT, this tactic could be adapted for other professions requiring remote work and access to sensitive information.

FAQ

What is “Jasper Sleet”?
Jasper Sleet is Microsoft’s name for a cluster of North Korean activity that deploys remote IT workers and leverages AI to enhance their operations.
How can companies detect AI-generated deepfakes during interviews?
Look for pixellation around the edges of faces, eyes, ears, and glasses, as well as inconsistencies in how light interacts with the face.
What is the primary motivation behind this scam?
To generate revenue for the North Korean government and fund its weapons programs.

Staying ahead of this evolving threat requires vigilance, investment in security measures, and a proactive approach to risk management. The line between legitimate applicants and state-sponsored operatives is becoming increasingly blurred, demanding a new level of scrutiny in the hiring process.

Explore further: Learn more about cybersecurity best practices and threat intelligence on the Microsoft Security Blog.
