Microsoft and ServiceNow’s exploitable agents reveal a growing – and preventable – AI security crisis

AI Agents Pose New Security Risks as Lateral Movement Threats Emerge

Two major security vulnerabilities in agentic AI systems have cybersecurity experts sounding alarms about the risks of autonomous AI agents in enterprise environments.

ServiceNow’s “BodySnatcher” Vulnerability

Security researchers at AppOmni Labs uncovered a critical vulnerability in ServiceNow’s platform dubbed “BodySnatcher.” This flaw allowed unauthenticated attackers to impersonate administrators and gain unauthorized access to sensitive corporate data using only a target’s email address.

“Imagine an attacker halfway across the globe with no credentials, able to remote control an organization’s AI and override security controls,” said Aaron Costello, AppOmni Labs’ chief of research. The vulnerability could grant access to customer Social Security numbers, healthcare information, financial records, and intellectual property.

ServiceNow addressed the issue in October 2025 with a security update, though the company maintains the vulnerability was never exploited in the wild.

Microsoft’s “Connected Agents” Default Setting

Meanwhile, Zenity Labs discovered that Microsoft’s Copilot Studio creates agents with a “connected agents” feature enabled by default. This allows other agents, registered or not, to connect to these agents and leverage their capabilities.

“While not technically a vulnerability, this creates significant risk,” said Michael Bargury, Zenity Labs co-founder and CTO. The feature enables malicious agents to connect to legitimate, privileged agents and exploit their capabilities, particularly those with email-sending functions or access to sensitive business data.

Microsoft views this as a feature rather than a bug, stating that disabling it universally would break core productivity scenarios. The company recommends IT administrators manually disable the feature for agents that access sensitive resources.

The Lateral Movement Threat

Both vulnerabilities highlight a critical concern: AI agents can be used for lateral movement across corporate networks, allowing threat actors to escalate privileges and access sensitive systems. The situation mirrors the early days of software development, when security was often an afterthought.

Google’s cybersecurity leaders have identified “shadow agents”—independently deployed AI agents outside corporate approval—as a critical challenge for 2026. These uncontrolled agents create invisible pipelines for data leaks, compliance violations, and intellectual property theft.

Expert Recommendations

Cybersecurity experts recommend adopting a “least privilege” approach when deploying AI agents:

  • Start with minimal access permissions
  • Only grant necessary privileges for specific tasks
  • Implement detailed monitoring and tracing of agent activities
  • Disable unnecessary connection features by default
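
The recommendations above can be sketched in code. The following is a minimal, hypothetical illustration of a least-privilege, deny-by-default posture for agent tool access; the `Agent` and `ToolPolicy` names are invented for this example and do not correspond to any vendor’s API. Each agent starts with an empty allowlist, is granted only the tools its task requires, and every invocation, whether allowed or denied, is recorded for monitoring:

```python
# Hypothetical sketch: least-privilege tool access for an AI agent.
# Agent and ToolPolicy are illustrative names, not any vendor's API.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")


@dataclass(frozen=True)
class ToolPolicy:
    """Deny-by-default allowlist of tools an agent may invoke."""
    allowed_tools: frozenset = frozenset()

    def permits(self, tool_name: str) -> bool:
        return tool_name in self.allowed_tools


@dataclass
class Agent:
    name: str
    policy: ToolPolicy
    audit_trail: list = field(default_factory=list)

    def invoke(self, tool_name: str) -> str:
        # Every call is traced, whether it succeeds or is denied.
        allowed = self.policy.permits(tool_name)
        self.audit_trail.append((tool_name, allowed))
        if not allowed:
            log.warning("%s: denied tool %r", self.name, tool_name)
            raise PermissionError(f"{self.name} may not use {tool_name}")
        log.info("%s: invoked %r", self.name, tool_name)
        return f"{tool_name} executed"


# Grant only the privileges the task needs -- nothing else.
reporting_agent = Agent("reporting", ToolPolicy(frozenset({"read_report"})))
reporting_agent.invoke("read_report")      # permitted
try:
    reporting_agent.invoke("send_email")   # denied by default
except PermissionError:
    pass
```

The key design choice is that the policy is an explicit allowlist rather than a blocklist: an agent with an empty policy can do nothing, and email-sending or other sensitive capabilities must be deliberately granted, which is the inverse of the “connected by default” posture the article describes.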

As organizations rapidly deploy AI agents for productivity gains, security experts warn that the agentic surface of IT estates will become an increasingly attractive target for threat actors seeking easy lateral movement opportunities.

Tags: #AIsecurity #cybersecurity #agenticAI #lateralmovement #ServiceNow #Microsoft #Copilot #AIagents #databreach #cyberthreats #enterpriseAI #shadowAI #leastprivilege #AppOmni #ZenityLabs
