AI is already making online crimes easier. It could get much worse.

AI-Powered Ransomware? Not Quite—But the Real Threat Is Already Here

Few recent cybersecurity stories have generated as much intrigue, and as much confusion, as the discovery of “PromptLock,” a piece of malware initially touted as the first AI-powered ransomware. The find, made by security researchers Anton Cherepanov and Tomáš Strýček, was hailed as a watershed moment in the evolution of cyber threats: their analysis suggested that generative AI had crossed a dangerous threshold, enabling highly adaptive, automated ransomware attacks. Headlines around the globe soon declared a new era of AI-driven cybercrime.

But the truth proved more nuanced. Just a day after Cherepanov and Strýček published their findings, a team from New York University stepped forward to clarify: PromptLock was not an active threat lurking in the wild but a research project designed to demonstrate that every stage of a ransomware campaign could be automated. In other words, the technology exists, but the apocalyptic scenario of AI autonomously launching ransomware attacks remains, for now, a proof of concept rather than a present danger.

Yet this revelation is no reason to breathe easy. The real story is that cybercriminals are already harnessing artificial intelligence, not to create sentient, autonomous hackers, but to make their existing operations faster, cheaper, and more effective. Just as software engineers use AI to write and debug code, hackers are leveraging the same tools to streamline their attacks, lowering the barrier to entry and expanding the pool of potential adversaries.

Lorenzo Cavallaro, a professor of computer science at University College London, puts it bluntly: the prospect of more frequent and potent cyberattacks is not a distant possibility, but “a sheer reality.” The cybersecurity community is grappling with a new normal where AI acts as a force multiplier for malicious actors.

Some voices in Silicon Valley have gone so far as to warn that AI is on the cusp of launching fully automated, self-directed cyberattacks, but most security researchers dismiss these claims as overblown hype. Marcus Hutchins, a principal threat researcher at Expel who is renowned for stopping the WannaCry ransomware outbreak in 2017, argues that the fixation on “AI superhackers” is misguided. Experts urge a focus instead on the immediate, tangible risks that AI is already amplifying.

One of the most pressing concerns is the explosion of AI-enhanced scams. Criminals are increasingly turning to deepfake technologies to impersonate individuals, whether by mimicking a CEO’s voice on a phone call or fabricating a video message, to trick victims into handing over money or sensitive information. These AI-driven deceptions are becoming both more common and more convincing, making them harder to detect and more damaging when they succeed.

Attackers adopted generative AI almost as soon as tools like ChatGPT became widely available in late 2022, and the first wave of AI-assisted cybercrime was, unsurprisingly, spam. Microsoft reports that in the year leading up to April 2025 it blocked $4 billion worth of scams and fraudulent transactions, many of them likely aided by AI-generated content. Researchers at Columbia University, the University of Chicago, and Barracuda Networks estimate that at least half of all spam emails are now produced using large language models (LLMs).

But the threat extends beyond mass spam campaigns. The same research team analyzed nearly 500,000 malicious messages collected before and after ChatGPT’s launch and uncovered a troubling trend: AI is increasingly being deployed in more targeted and sophisticated schemes. Among focused email attacks, those that impersonate trusted figures to trick employees into divulging funds or sensitive information, the proportion generated using LLMs rose from 7.6% in April 2024 to 14% by April 2025. That shift signals a move toward more personalized, convincing attacks that exploit AI’s ability to craft tailored messages at scale.
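The study’s precise methodology isn’t detailed here, but the general approach, training a classifier on messages of known provenance and then applying it to new mail to estimate what share is LLM-generated, can be sketched. The toy Python example below is purely illustrative: the training snippets, character n-gram features, and scikit-learn pipeline are all assumptions made for demonstration, not the researchers’ actual code.

    # Hypothetical illustration: a toy classifier of the kind that could be
    # trained to estimate what share of a mail corpus is LLM-generated.
    # Everything here (data, features, model) is invented for demonstration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: label 1 = LLM-generated, 0 = human-written.
    # A real study would use a large labeled corpus, e.g. treating messages
    # that predate ChatGPT's release as reliably human-written negatives.
    train_texts = [
        "Dear valued customer, I hope this message finds you well...",
        "hey bob send me the invoice asap thx",
        "I trust this email finds you in good health. Kindly find attached the requested document.",
        "URGENT!! your acount is blocked click here now",
    ]
    train_labels = [1, 0, 1, 0]

    # Character n-gram TF-IDF features feeding a simple linear classifier.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(train_texts, train_labels)

    # Estimate the LLM-generated share of a new batch of messages by
    # averaging the per-message predictions.
    batch = [
        "Good day, I am reaching out regarding an exciting opportunity...",
        "free $$$ click link now",
    ]
    print(f"Estimated LLM-generated share: {model.predict(batch).mean():.0%}")

Scaled to hundreds of thousands of messages, averaging per-message predictions like these over time is one plausible way to arrive at trend figures such as the 7.6% to 14% rise described above.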

The implications are clear: while the specter of autonomous, AI-driven ransomware may still be more science fiction than fact, the immediate threat posed by AI-enhanced cybercrime is very real. As these technologies become more accessible and powerful, the frequency and sophistication of attacks are set to rise. The cybersecurity community is sounding the alarm, urging individuals, businesses, and governments to bolster their defenses and stay vigilant.

In the end, the story of PromptLock serves as both a cautionary tale and a wake-up call. Headlines may sometimes exaggerate what AI can do in the hands of cybercriminals, but the tools of tomorrow’s attacks are already being deployed today. As AI continues to evolve, so must our strategies for defending against its misuse. The battle for cybersecurity is entering a new phase, one in which adaptability, awareness, and proactive defense will be more critical than ever.

