Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support
North Korean Hackers Weaponize Google’s Gemini AI for Cyber Espionage Operations
In a startling revelation that underscores the evolving cyber threat landscape, Google’s Threat Intelligence Group (GTIG) has exposed how North Korea-linked hacking collective UNC2970 is leveraging Google’s generative AI model Gemini to supercharge their espionage campaigns. The discovery marks a significant escalation in how state-sponsored threat actors are integrating artificial intelligence into their operational toolkits.
AI-Powered Reconnaissance: The New Frontier of Cyber Espionage
According to Google’s detailed report, UNC2970—a hacking group whose activity clearly overlaps with the infamous Lazarus Group, Diamond Sleet, and Hidden Cobra—has been utilizing Gemini to conduct sophisticated reconnaissance operations. The AI model has become an invaluable asset in their intelligence-gathering arsenal, allowing them to synthesize open-source intelligence (OSINT) and meticulously profile high-value targets.
“The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance,” GTIG stated in their comprehensive analysis. This approach has enabled UNC2970 to map out specific technical job roles, analyze salary information, and gather detailed intelligence on major cybersecurity and defense companies.
What makes this particularly concerning is how the AI effectively blurs the line between legitimate professional research and malicious reconnaissance activities. This technological camouflage allows the North Korean operatives to craft highly convincing phishing personas and identify the most vulnerable entry points for initial system compromise.
UNC2970: The Operation Dream Job Connection
UNC2970 has earned its reputation through Operation Dream Job, a long-running campaign that has specifically targeted aerospace, defense, and energy sectors. The group’s modus operandi typically involves approaching victims under the guise of legitimate job opportunities, only to deliver malware payloads that compromise organizational security.
Google’s threat intelligence team emphasized that UNC2970 has “consistently” focused on defense sector targeting, with their impersonation of corporate recruiters forming a cornerstone of their attack methodology. The integration of Gemini into these operations represents a significant evolution in their capabilities, allowing for more targeted and convincing social engineering attacks.
The Global AI Arms Race: Multiple Threat Actors Join the Fray
UNC2970 is far from alone in recognizing the strategic value of AI in cyber operations. Google’s investigation revealed that multiple sophisticated threat actors across different nation-states have integrated Gemini into their workflows, creating a concerning trend in AI-assisted cyber warfare.
China-Linked Operations Go AI-Native
Several Chinese-linked groups have embraced Gemini with remarkable enthusiasm:
Temp.HEX (Mustang Panda) has utilized the AI to compile detailed dossiers on specific individuals, particularly focusing on targets in Pakistan. The group has also gathered operational and structural data on separatist organizations across various countries, demonstrating the AI’s utility in geopolitical intelligence gathering.
APT31 (Judgement Panda) has automated vulnerability analysis and generated targeted testing plans, with operators claiming to be legitimate security researchers to mask their true intentions. This sophisticated approach allows them to identify and exploit vulnerabilities with unprecedented speed and precision.
APT41 has leveraged Gemini to extract explanations from open-source tool documentation, particularly focusing on README.md pages. The group has also used the AI to troubleshoot and debug exploit code, effectively creating a digital assistant that accelerates their development of malicious tools.
UNC795, another China-linked entity, has employed Gemini for code troubleshooting, research activities, and the development of web shells and scanners specifically designed for PHP web servers.
Iranian Cyber Operations Get an AI Upgrade
APT42, the Iranian threat actor, has taken a particularly innovative approach to AI integration. The group has used Gemini to facilitate reconnaissance and targeted social engineering by crafting personas specifically designed to induce engagement from their targets. Their activities extend beyond simple reconnaissance, including the development of a Python-based Google Maps scraper, creation of a SIM card management system in Rust, and research into exploiting the WinRAR vulnerability (CVE-2025-8088).
Advanced Malware: When AI Meets Malicious Code
Google’s investigation uncovered two particularly sophisticated examples of AI-enhanced malware that demonstrate the next generation of cyber threats.
HONESTCUE: The AI-Generated Downloader Framework
HONESTCUE represents a paradigm shift in malware architecture. This downloader and launcher framework sends prompts via Google Gemini’s API and receives C# source code as the response. Rather than using the AI to update itself, HONESTCUE calls the Gemini API to generate the code that implements its “stage two” functionality: downloading and executing additional malware payloads.
HONESTCUE’s secondary stage is particularly insidious because it executes filelessly. It takes the C# source code returned by the Gemini API and uses the legitimate .NET CSharpCodeProvider class to compile and execute the payload directly in memory. Because the payload never touches disk, detection and forensic analysis become significantly more challenging for security teams.
COINBAIT: AI-Generated Phishing at Scale
COINBAIT represents another concerning development in the AI-cybercrime intersection. This AI-generated phishing kit was built using Lovable AI and masquerades as a legitimate cryptocurrency exchange for credential harvesting purposes. The sophistication of COINBAIT’s design demonstrates how AI can be used to create convincing fake websites that can fool even security-conscious users.
Some aspects of COINBAIT-related activity have been attributed to a financially motivated threat cluster dubbed UNC5356, suggesting that AI-enhanced cybercrime is becoming increasingly commercialized and accessible to a broader range of threat actors.
The ClickFix Revolution: AI-Powered Social Engineering
Google also highlighted a recent wave of ClickFix campaigns that leverage the public sharing features of generative AI services. These campaigns host realistic-looking instructions designed to “fix” common computer issues, ultimately delivering information-stealing malware to unsuspecting victims.
This tactic was first flagged in December 2025 by cybersecurity firm Huntress, which noted the increasing sophistication of these AI-generated social engineering campaigns. The use of legitimate AI service features to distribute malware represents a clever abuse of trusted platforms and highlights the challenges of securing AI ecosystems.
Model Extraction Attacks: The Silent Threat
Perhaps most concerning is Google’s discovery of model extraction attacks targeting Gemini. These attacks systematically query proprietary machine learning models to extract information and build substitute models that mirror the target’s behavior. In one large-scale attack, Gemini was targeted by over 100,000 prompts that posed a series of questions aimed at replicating the model’s reasoning ability across a broad range of tasks in non-English languages.
Security firm Praetorian demonstrated the effectiveness of the technique with a proof-of-concept extraction attack in which a replica model achieved an 80.1% accuracy rate after just 1,000 queries were sent to the victim’s API and the outputs recorded. Training the replica for only 20 epochs was sufficient to produce a functional alternative to the original model.
“This creates a false sense of security,” explained security researcher Farida Shafik. “In reality, behavior is the model. Every query-response pair is a training example for a replica. The model’s behavior is exposed through every API response.”
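To make the mechanics concrete, below is a minimal, self-contained sketch of a substitute-model (“replica”) extraction in the same spirit. Everything in it is illustrative rather than a reconstruction of the Praetorian work: the “victim” is a locally trained scikit-learn classifier standing in for a remote inference API, and the query budget and model architectures are toy choices, not the parameters of any real attack.

```python
# Illustrative model-extraction sketch (toy scale, hypothetical victim API).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# A stand-in "victim" model that the attacker can only query, never inspect.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_owner, X_attacker, y_owner, _ = train_test_split(X, y, test_size=0.5,
                                                   random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(X_owner, y_owner)

def victim_api(queries):
    """Simulates the remote endpoint: inputs go in, predictions come out."""
    return victim.predict(queries)

# Step 1: spend a fixed query budget and record every response.
QUERY_BUDGET = 1000
queries = X_attacker[:QUERY_BUDGET]
stolen_labels = victim_api(queries)  # each query/response pair becomes training data

# Step 2: train a replica purely on the recorded pairs.
replica = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

# Step 3: measure how closely the replica mirrors the victim on unseen inputs.
holdout = X_attacker[QUERY_BUDGET:QUERY_BUDGET + 1000]
agreement = (replica.predict(holdout) == victim_api(holdout)).mean()
print(f"Replica agrees with the victim on {agreement:.1%} of held-out queries")
```

The point of the sketch is the workflow rather than the specific models: any endpoint that returns predictions leaks training signal with every response, which is precisely the exposure Shafik describes.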
The Implications: A New Era of AI-Enhanced Cyber Warfare
The integration of AI into cyber operations represents a fundamental shift in the threat landscape. These technologies are democratizing access to sophisticated attack capabilities, allowing even relatively unsophisticated threat actors to conduct operations that would have previously required significant expertise and resources.
The speed and scale at which AI can process information, generate convincing content, and automate complex tasks means that traditional cybersecurity defenses must evolve rapidly to address these new challenges. Organizations must now consider not just the technical vulnerabilities in their systems, but also the AI-enhanced social engineering tactics that can bypass human defenses.