AI threats will get worse: 6 ways to match the tenacity of your digital adversaries
ZDNET’s Key Takeaways
- AI is rapidly being weaponized by threat actors for sophisticated attacks
- Organizations must evolve their security practices to match growing threats
- Experts recommend specific strategies to protect against AI-powered cybercrime
When was the last time you questioned whether that mysterious phone caller who hung up after you said “hello” was recording your voice for malicious purposes? The FCC warned about such scams nearly a decade ago, before artificial intelligence was even on the scene. Now, with AI capable of cloning your voice from as little as three seconds of audio, the stakes have skyrocketed.
Whether used for legitimate or nefarious purposes, AI’s chief selling proposition has been its ability to deliver speed and scale. In the hands of a threat actor, a tremendous amount of damage can be done in the blink of an eye. And it’s getting worse. Your only meaningful response is to match your adversaries’ tenacity.
Threat Actors Are Rapidly Adapting AI to Their Tactics
In its January 2025 report on Adversarial Misuse of Generative AI, Google’s Threat Intelligence Group (GTIG) revealed that threat actor reliance on AI tools was initially limited to basic productivity use cases. “Rather than engineering tailored prompts, threat actors used more basic measures or publicly available jailbreak prompts in unsuccessful attempts to bypass safety controls,” the report stated.
However, by November 2025, GTIG noted significant advancements: “Adversaries are no longer leveraging artificial intelligence just for productivity gains; they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution.”
Anthropic’s March 2025 report on detecting malicious uses of its Claude LLM highlighted similar concerns: “The most novel case of misuse detected was a professional ‘influence-as-a-service’ operation showcasing a distinct evolution in how certain actors are leveraging LLMs for influence operation campaigns.”
The Unstoppable Advancement of Deepfake Technology
Perhaps the most concerning evolving threat is the increasingly convincing nature of deepfake videos, images, and audio. The February 2025 launch of ByteDance’s Seedance 2.0 demonstrated just how far this technology has come, producing incredibly convincing footage of AI-generated celebrity likenesses that sparked swift backlash from the entertainment industry.
LastPass director of AI innovation Alex Cox told ZDNET that Seedance represents a concerning inflection point: “AI can produce content that is almost indistinguishable, if not completely indistinguishable, from real human activity. We’ve gotten to the point of multimodal AI capabilities where most forms of online human interaction can be believably faked by AI.”
Cox predicts that AI-powered video and audio tools will evolve to the point where we can be easily tricked into believing we’re dealing with an authentic person—even in video meetings—when it’s actually a deepfake. “Right now, AI tech can’t do this in real-time,” he said. “There is still latency and artifacts involved that give people that ‘uncanny valley’ feeling. But we are rapidly approaching parity in this area.”
Text and Still Images: Already a Lost Cause?
When it comes to static mediums like text and still images, we’ve already crossed that threshold. Two years ago, Sports Illustrated and TheStreet were caught publishing articles by fake AI-generated authors, causing reputational damage and resulting in the dismissal of top executives.
These incidents point to the increasing frequency of purposeful deception and the growing likelihood that many of the online identities we encounter are inauthentic, serving a range of highly questionable motives.
Six Ways to Defend Yourself—Starting Now
Don’t wait for an AI-enabled attack before taking action. Experts warn that individuals, IT professionals, and organizations alike must evolve their best practices and vigilance now to reduce the likelihood of a catastrophic event at the hands of AI-equipped cybercriminals.
1. Stay Fanatically Educated on Evolving Threats
Stay up to date on AI safety and security, and stay aware of the evolving threat landscape. Pay close attention to the most important sources of information, such as the threat intelligence and AI safety groups at frontier AI developers Anthropic, Google DeepMind, and OpenAI. Set up your feeds so you learn of new information as soon as it’s made available by reputable cybersecurity and threat intelligence sources.
2. Move to Non-Phishable Credentials
Be aggressive about moving to non-phishable passwordless credentials, including passkeys and number-matching-based multifactor authentication. The majority of successful attacks start with some form of phishing or vishing. With the help of AI and voice cloning, phishing and vishing attacks will become more convincing. The sooner you and your company make the move to non-phishable credentials, the better.
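Number matching is worth understanding concretely: instead of blindly tapping “approve” on a push notification—which an attacker can trigger and a fatigued user will eventually accept—the login screen displays a number that the user must select in their authenticator app. The sketch below is a minimal illustration of that flow, not a production implementation; the two-digit range and three choices are arbitrary assumptions.

```python
import secrets


def start_number_match(num_choices: int = 3) -> tuple[int, list[int]]:
    """Server side: pick the number shown on the login screen, plus
    decoy choices to display in the user's authenticator app."""
    choices: list[int] = []
    while len(choices) < num_choices:
        n = secrets.randbelow(100)
        if n not in choices:
            choices.append(n)
    correct = secrets.choice(choices)
    return correct, sorted(choices)


def verify_number_match(correct: int, user_selection: int) -> bool:
    """Approve the sign-in only if the user tapped the same number
    that was displayed on the login screen."""
    return user_selection == correct
```

Because an attacker who triggers the push can’t see the number on the victim’s login screen, blind approval attacks fail—the victim would have to be socially engineered into reading the number aloud, which raises the bar considerably.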
3. Identify All Your Agents
Before making the move to agentic AI, ensure you have a way to identify every legitimate agent within your control or your organization’s infrastructure. Vendors like Microsoft, Okta, and Ping Identity offer identity and access management solutions that manage the identities of agents on your network, much as you manage human identities. Although agentic AI is likely to yield enormous productivity gains, legitimate agents are a target-rich environment for malicious agents.
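The IAM products mentioned above do far more than this, but the core idea—an authoritative inventory of sanctioned agents, with anything unlisted treated as untrusted—can be sketched in a few lines. All names here are hypothetical and purely illustrative:

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    name: str
    owner: str                    # the human or team accountable for this agent
    scopes: frozenset[str]        # what the agent is permitted to do
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class AgentRegistry:
    """Inventory of every sanctioned agent; anything not listed is untrusted."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentIdentity] = {}

    def register(self, agent: AgentIdentity) -> str:
        self._agents[agent.agent_id] = agent
        return agent.agent_id

    def is_sanctioned(self, agent_id: str) -> bool:
        return agent_id in self._agents
```

The key design choice is that every agent carries an accountable human owner and an explicit scope list—exactly the attributes you’d audit when hunting for a rogue agent impersonating a legitimate one.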
4. Embrace Zero Trust
Employ a zero-trust strategy wherever possible. Yes, certain people, organizations, processes, and even AI agents will need access to various resources and systems of record to execute their responsibilities. But always start them out with few or even no privileges to see what breaks. Trust should be earned, not the default.
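The “start with nothing and grant only what breaks” pattern boils down to a deny-by-default policy check. This toy sketch (names are illustrative, not from any particular product) shows the essential property—unknown principals and ungranted permissions both fall through to deny:

```python
class LeastPrivilegePolicy:
    """Deny-by-default authorization: every principal starts with no grants."""

    def __init__(self) -> None:
        self._grants: dict[str, set[str]] = {}

    def grant(self, principal: str, permission: str) -> None:
        self._grants.setdefault(principal, set()).add(permission)

    def revoke(self, principal: str, permission: str) -> None:
        self._grants.get(principal, set()).discard(permission)

    def is_allowed(self, principal: str, permission: str) -> bool:
        # No entry for the principal, or no grant for the permission,
        # both resolve to "deny" -- trust is earned, not the default.
        return permission in self._grants.get(principal, set())
```

Note that the safe outcome requires no special-case code: a brand-new user, an AI agent, or a typo’d principal name all get denied until someone explicitly grants access.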
5. Know Your OAuth Tokens
Get smarter about your OAuth token exposure. You may not know it, but you’ve likely issued one or more OAuth tokens that allow one service to access another on your behalf. Such delegations of authority are expected to multiply by several orders of magnitude once agentic AI takes off. But here’s the question: Do you know all the OAuth tokens you’ve issued? And for those that you do, do you know how to revoke them?
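Revocation itself is standardized: RFC 7009 defines an OAuth 2.0 revocation endpoint that accepts a form-encoded POST containing the token and an optional type hint. The sketch below only builds the request—the endpoint URL is a placeholder, since each provider publishes its own:

```python
from urllib.parse import urlencode


def build_revocation_request(token: str,
                             token_type_hint: str = "access_token") -> tuple[str, str]:
    """Return the (endpoint, form body) for an RFC 7009 token revocation call.
    The endpoint below is hypothetical -- look up your provider's real one
    (often listed in its OAuth/OpenID Connect discovery document)."""
    endpoint = "https://auth.example.com/oauth/revoke"  # placeholder
    body = urlencode({"token": token, "token_type_hint": token_type_hint})
    return endpoint, body
```

In practice the POST must also carry your client credentials, and most providers additionally offer a dashboard page listing every app you’ve authorized—reviewing that list periodically is the low-tech version of this exercise.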
6. Stay Skeptical
Become a skeptic if you aren’t one already. As it becomes increasingly difficult to tell the difference between legitimate content and deepfakes, now is the time to become less trusting of everything you see or hear online. Err on the side of caution and always double-check the authenticity of such messages.
Know Your Enemy
When considering these and other ways to uplevel your operational security, put yourself in the shoes of your adversaries, given what AI can do now and in the near future. If you were your adversary, you would likely exhaust every AI option that exists to achieve your objective. And the better AI gets at helping you achieve that malicious objective, the more defenseless your victims will seem.
Oh, and about those mysterious callers who wait for you to say “yes” or “hello” and then hang up—perhaps you should consider not answering calls from unknown numbers (or, at a minimum, waiting for the caller to speak first). It’s a bitter pill to swallow, but then again, keep the idea of zero trust in mind.
Tags: AI cybersecurity, deepfake technology, phishing prevention, zero trust security, agentic AI, OAuth tokens, voice cloning, cyber threats 2025, AI-powered attacks, passwordless authentication