Google Gemini Under Fire: State-Sponsored Hackers Are Using AI to Supercharge Cyberattacks

In a shocking revelation that has sent ripples through the cybersecurity world, Google has disclosed that its own AI model, Gemini, is being weaponized by state-sponsored hacking groups from Russia, China, North Korea, and Iran. The tech giant’s Threat Intelligence Group has released a comprehensive report detailing how these malicious actors are leveraging Gemini to automate surveillance, identify high-value targets, and even develop sophisticated malware.
The report, titled "Distillation, Experimentation, and Integration: AI Adversarial Use," paints a chilling picture of how artificial intelligence is becoming an indispensable tool in the arsenal of cybercriminals. While AI has been hailed as a revolutionary technology with the potential to transform industries, this latest revelation underscores its darker side.
The AI Advantage: How Hackers Are Using Gemini
Large language models like Gemini are particularly adept at processing and analyzing vast amounts of data—a task that would take human teams years to complete. This capability is being exploited by hackers to sift through mountains of information, identify software vulnerabilities, and pinpoint potential targets for cyberattacks.
One of the most alarming examples highlighted in the report involves a Chinese hacking group known as APT31. The group used Gemini to test the Hexstrike MCP tooling, a system that connects AI agents with existing security tools to identify vulnerabilities. The problem? Gemini can't distinguish between legitimate security researchers (white hats) and malicious hackers (black hats), because their work often overlaps. This means the same tools and techniques used to protect systems can be repurposed to exploit them.
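To make the "agents connected to existing tools" idea concrete, here is a minimal, hypothetical sketch of that bridging pattern: a registry exposes an existing tool (here, a stubbed port scanner) by name, and a dispatcher routes a model-emitted tool call to it. This is illustrative only; it does not use the real Hexstrike code or the actual MCP SDK, and the function names are invented for the example.

```python
# Hypothetical sketch of an MCP-style tool bridge (not Hexstrike's real API).
import json

TOOLS = {}

def tool(fn):
    """Register a callable so an AI agent can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def scan_ports(host: str) -> list[int]:
    # Stub standing in for a real scanner such as nmap.
    return [22, 80, 443]

def dispatch(request_json: str) -> str:
    """Route a model-emitted tool-call request to the registered function."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req["arguments"])
    return json.dumps({"tool": req["tool"], "result": result})

# An agent would emit a structured call like this:
print(dispatch('{"tool": "scan_ports", "arguments": {"host": "example.com"}}'))
```

The dual-use problem falls out of the design: nothing in the dispatcher knows or cares whether the caller is a defender auditing their own network or an attacker probing someone else's.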
From Surveillance to Social Engineering
The report also reveals that hacking groups associated with China and Iran have been running more sophisticated campaigns. These include debugging exploit code, developing proof-of-concept exploits for known vulnerabilities, and engaging in social engineering tactics. For instance, an Iranian-linked group was found developing a proof-of-concept exploit for a well-known flaw in WinRAR, a popular file compression tool.
But it’s not just about technical exploits. The report highlights how AI is being used to create and disseminate propaganda. Threat actors from China, Iran, Russia, and Saudi Arabia are producing political satire and propaganda to advance specific ideas across both digital platforms and physical media, such as printed posters. This “AI slop,” as the report calls it, is becoming increasingly prevalent, blurring the lines between genuine content and manipulated narratives.
Google’s Response: Restricting Access
In response to these findings, Google claims to have restricted access to Gemini for users it can confidently identify as malicious, including the detected state-sponsored hacking teams. However, the effectiveness of these measures remains to be seen, as hackers continue to find new ways to circumvent security protocols.
The Broader Implications
This revelation raises important questions about the ethical use of AI and the responsibilities of tech companies in preventing their technologies from being misused. While AI has the potential to revolutionize fields like astronomy and cancer research, its misuse in cybercrime highlights the need for robust safeguards and ethical guidelines.
As AI continues to evolve, so too will the tactics of cybercriminals. The challenge for tech companies, policymakers, and cybersecurity experts is to stay one step ahead, ensuring that the benefits of AI are harnessed for good while mitigating its potential for harm.
Tags: Google Gemini, AI hacking, state-sponsored cyberattacks, cybersecurity, APT31, WinRAR vulnerability, AI propaganda, machine learning, threat intelligence, Hexstrike MCP tooling, social engineering, malware development, AI ethics, cybercrime, digital surveillance.