State-Sponsored Hackers Exploit AI in Cyberattacks: Google
State-sponsored hackers are weaponizing artificial intelligence at scale, with government-backed threat actors from Iran, North Korea, China, and Russia integrating advanced AI models such as Google’s Gemini into their cyberattacks, according to a new report from Google’s Threat Intelligence Group (GTIG).
The quarterly AI Threat Tracker report, released today, describes an escalation in cyber warfare tactics in which AI serves as a force multiplier for espionage and sabotage operations. According to the report, government-backed hackers achieved productivity gains of up to 300% in reconnaissance, social engineering, and malware development during the final quarter of 2025.
“For government-backed threat actors, large language models have become essential force multipliers for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in the report. “We’re witnessing a fundamental shift in how state-sponsored cyber operations are conducted.”
Iranian APT42: AI-powered social engineering goes mainstream
Iranian threat actor APT42 has pioneered the use of Gemini for social engineering that is difficult to catch with conventional phishing heuristics. The group misuses the model to enumerate official email addresses for specific entities and to research credible pretexts for approaching targets.
By feeding Gemini detailed target biographies, APT42 crafts personas and scenarios designed to elicit engagement from high-value individuals. The group also uses AI to translate between languages and better understand non-native phrases – abilities that help state-sponsored hackers bypass traditional phishing red flags like poor grammar or awkward syntax that security systems have been trained to detect.
North Korean hackers blur lines between research and reconnaissance
North Korean government-backed actor UNC2970, which targets the defense sector and impersonates corporate recruiters, has integrated AI deeply into its operations. The group uses Gemini to synthesize open-source intelligence and build detailed profiles of high-value targets.
Their reconnaissance includes searching for information on major cybersecurity and defense companies, mapping specific technical job roles, and gathering salary information to create hyper-realistic phishing personas. “This activity blurs the distinction between routine professional research and malicious reconnaissance, as the actor gathers the necessary components to create tailored, high-fidelity phishing personas,” GTIG noted.
Model extraction attacks surge 500% in Q4 2025
Beyond operational misuse, Google DeepMind and GTIG identified a 500% increase in model extraction attempts – also known as “distillation attacks” – aimed at stealing intellectual property from AI models. One campaign targeting Gemini’s reasoning abilities involved more than 100,000 prompts designed to coerce the model into outputting its full reasoning process.
The breadth of questions suggested an attempt to replicate Gemini’s reasoning ability across a range of tasks in non-English target languages. While GTIG observed no direct attacks on frontier models by advanced persistent threat actors, the team identified and disrupted frequent model extraction attempts by private-sector entities and researchers worldwide seeking to clone proprietary logic.
Google’s systems recognized these attacks in real time and deployed defenses to protect internal reasoning traces, but the scale of the activity marks a new front in intellectual property theft.
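The report does not describe Google’s detection internals, but the pattern it cites (tens of thousands of prompts probing for full reasoning traces) is the kind of volume-and-content anomaly any API provider can flag. The sketch below is a minimal, hypothetical illustration of that idea: it counts prompts per API key in a sliding window and flags keys whose request volume and trace-requesting phrasing both cross thresholds. The window size, thresholds, and REASONING_PROBES phrase list are assumptions for illustration, not Google’s actual defenses.

```python
from collections import defaultdict, deque
import time

# Hypothetical thresholds -- illustrative only, not Google's real limits.
WINDOW_SECONDS = 3600         # sliding window: one hour
MAX_PROMPTS_PER_WINDOW = 500  # volume ceiling per API key
PROBE_RATIO_THRESHOLD = 0.4   # fraction of prompts asking for reasoning traces

# Assumed phrases that try to coerce a model into dumping its reasoning.
REASONING_PROBES = (
    "show your full reasoning",
    "chain of thought",
    "step by step, verbatim",
)

class ExtractionMonitor:
    """Flags API keys whose prompt stream looks like a distillation run."""

    def __init__(self):
        self.history = defaultdict(deque)  # api_key -> deque of (ts, is_probe)

    def record(self, api_key: str, prompt: str, now: float | None = None) -> bool:
        """Log one prompt; return True if the key now looks like extraction."""
        now = time.time() if now is None else now
        is_probe = any(p in prompt.lower() for p in REASONING_PROBES)
        events = self.history[api_key]
        events.append((now, is_probe))
        # Drop events that have aged out of the sliding window.
        while events and events[0][0] < now - WINDOW_SECONDS:
            events.popleft()
        probes = sum(1 for _, flagged in events if flagged)
        # Flag only when volume *and* probe ratio both look like extraction.
        return (len(events) > MAX_PROMPTS_PER_WINDOW
                and probes / len(events) > PROBE_RATIO_THRESHOLD)
```

A real deployment would pair a filter like this with semantic clustering of prompts, since a patient attacker can spread queries across keys and rephrase probes, but the core signal, sustained and repetitive trace-hunting, is the same.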
AI-integrated malware emerges: HONESTCUE framework
GTIG observed malware samples, tracked as HONESTCUE, that use Gemini’s API to outsource functionality generation. The malware is designed to undermine traditional network-based detection and static analysis through a multi-layered obfuscation approach.
HONESTCUE functions as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code in response. A fileless second stage compiles and executes payloads directly in memory, leaving no artifacts on disk and giving signature-based antivirus tools little to scan.
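Because the malware fetches its code over a legitimate API, the most practical defensive signal is network egress: on most corporate endpoints, an unexpected process reaching an LLM API endpoint is anomalous. Below is a minimal, hypothetical sketch that scans a proxy log for LLM API domains contacted by processes outside an allowlist. The CSV log schema, domain list, and allowlist are assumptions for illustration; Gemini’s public API endpoint (generativelanguage.googleapis.com) is real, but adapt everything else to what your proxy or EDR actually emits.

```python
import csv
import sys
from pathlib import Path

# LLM API endpoints worth watching on corporate egress. The Gemini API
# is served from generativelanguage.googleapis.com; the rest are examples.
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",
    "api.openai.com",
    "api.anthropic.com",
}

# Processes expected to contact these APIs (assumption: tune per environment).
ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe", "python.exe"}

def flag_llm_egress(log_path: Path) -> list[dict]:
    """Return proxy-log rows where a non-allowlisted process hit an LLM API.

    Assumes a CSV log with 'process' and 'dest_host' columns, standing in
    for whatever schema your proxy or EDR exports.
    """
    hits = []
    with log_path.open(newline="") as fh:
        for row in csv.DictReader(fh):
            if (row["dest_host"] in LLM_API_DOMAINS
                    and row["process"].lower() not in ALLOWED_PROCESSES):
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in flag_llm_egress(Path(sys.argv[1])):
        print(f"suspicious LLM egress: {hit['process']} -> {hit['dest_host']}")
```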
Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange for credential harvesting, was built using the AI-powered platform Lovable AI, demonstrating how cybercriminals are leveraging legitimate AI development tools for malicious purposes.
ClickFix campaigns abuse AI chat platforms
In a novel social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services – including Gemini, ChatGPT, Copilot, DeepSeek, and Grok – to host deceptive content distributing ATOMIC malware targeting macOS systems.
Attackers manipulated AI models to create realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the “solution.” By creating shareable links to these AI chat transcripts, threat actors used trusted domains to host their initial attack stage, exploiting the credibility of major AI platforms to bypass user skepticism.
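One low-cost mitigation is to treat public AI-chat share links in inbound mail as a review signal, since this campaign’s first stage lives at exactly those URLs. The sketch below flags such links; the chatgpt.com and gemini.google.com patterns match those platforms’ public share-URL formats, while the Copilot, DeepSeek, and Grok patterns are assumptions to verify before deploying.

```python
import re

# Share-link patterns for public AI chat transcripts. The first two match
# ChatGPT's and Gemini's share URLs; the rest are assumed formats.
SHARE_LINK_PATTERNS = [
    re.compile(r"https://chatgpt\.com/share/[\w-]+"),
    re.compile(r"https://gemini\.google\.com/share/[\w-]+"),
    re.compile(r"https://copilot\.microsoft\.com/shares/[\w-]+"),
    re.compile(r"https://chat\.deepseek\.com/share/[\w-]+"),
    re.compile(r"https://grok\.com/share/[\w-]+"),
]

def find_ai_share_links(text: str) -> list[str]:
    """Extract shared AI-transcript URLs from an email body or message.

    Share links are a legitimate platform feature, so a hit warrants
    inspection (e.g. quarantine and detonation), not automatic blocking.
    """
    hits = []
    for pattern in SHARE_LINK_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

body = "Your audio driver is broken. Fix: https://gemini.google.com/share/abc123def"
print(find_ai_share_links(body))  # ['https://gemini.google.com/share/abc123def']
```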
Underground marketplace thrives on stolen API keys
GTIG’s observations of English- and Russian-language underground forums indicate persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals alike struggle to develop custom AI models, relying instead on mature commercial products accessed through stolen credentials.
One toolkit, “Xanthorox,” advertised itself as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed Xanthorox was not a bespoke model but actually powered by several commercial AI products, including Gemini, accessed through stolen API keys – highlighting the black market economy emerging around AI capabilities.
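The supply side of that market is leaked credentials, which makes key hygiene a direct countermeasure. Google API keys, including Gemini keys, share a well-known shape: the literal prefix AIza followed by 35 URL-safe characters. A repository sweep like the hypothetical sketch below catches many accidental exposures before they reach a forum; the scanning logic and paths are illustrative assumptions.

```python
import re
import sys
from pathlib import Path

# Google API keys (Gemini included) follow a well-known shape: the
# literal prefix "AIza" followed by 35 URL-safe characters.
GOOGLE_API_KEY_RE = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")

def scan_for_keys(root: str) -> list[tuple[str, int, str]]:
    """Walk a source tree and report (file, line, match) for candidate keys.

    A sketch of the hygiene that keeps keys off markets like the one
    behind Xanthorox: find and rotate them before an attacker does.
    """
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for match in GOOGLE_API_KEY_RE.finditer(line):
                findings.append((str(path), lineno, match.group()))
    return findings

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for file, lineno, key in scan_for_keys(root):
        # Print only a prefix so the report itself doesn't leak the key.
        print(f"{file}:{lineno}: possible Google API key {key[:8]}...")
```

Dedicated secret scanners, including those built into major code-hosting platforms, cover far more credential types; the point here is only that the stolen-key economy starts with exposures this easy to find.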
Google’s response and global implications
Google has taken action against the identified threat actors by disabling accounts and assets associated with the malicious activity. The company has also used this intelligence to strengthen its classifiers and models so that they refuse to assist with similar attacks in the future.
“We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.
GTIG emphasized that despite these developments, no APT or information operations actors have achieved breakthrough abilities that fundamentally alter the threat landscape. However, the findings underscore the evolving role of AI in cybersecurity, as both defenders and attackers race to leverage the technology’s capabilities.
For enterprise security teams, particularly in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain active, the report is a prompt to harden defenses against AI-augmented social engineering and reconnaissance operations.
The AI arms race in cyberspace is underway, and while neither side has yet gained a decisive edge, the pace is quickening.
Tags:
state-sponsored hackers, AI cyberattacks, Gemini exploitation, APT42, UNC2970, model extraction attacks, HONESTCUE malware, ClickFix campaigns, stolen API keys, AI threat intelligence, cyber warfare escalation, enterprise cybersecurity, Asia-Pacific cyber threats