Enterprise MCP adoption is outpacing security controls

AI Agents Are the New Attack Surface—And Security Teams Are Flying Blind

Artificial intelligence agents are rapidly becoming the most connected and privileged software entities in enterprise environments, creating a security challenge unlike anything security teams have faced before. As these autonomous systems proliferate, they’re expanding the attack surface far beyond traditional human-operated software, yet the industry lacks the frameworks needed to govern them effectively.

“If that attack vector gets utilized, it can result in a data breach, or even worse,” warned Spiros Xanthos, founder and CEO of Resolve AI, during a recent VentureBeat AI Impact Series event. The concern is shared across the industry, with experts describing the current state as “the wild, wild West” of AI security.

Traditional security frameworks were built around human interactions—clear chains of accountability, defined access patterns, and established protocols for authentication and authorization. But AI agents operate differently. They have personas, make autonomous decisions, and can work independently across systems without direct human oversight. This creates a fundamental mismatch between existing security controls and the reality of agentic AI deployment.

The MCP Problem: Simplifying Integration While Complicating Security

Model Context Protocol (MCP) has emerged as a promising solution for simplifying agent-to-agent communication and integration with enterprise systems. By providing a standardized way for AI agents to discover and interact with tools and data sources, MCP reduces the complexity of building multi-agent workflows. However, this very simplification comes with significant security trade-offs.

MCP servers tend to be “extremely permissive” by design, according to industry experts. Unlike traditional APIs, which typically include robust authentication, authorization, and rate-limiting mechanisms, MCP servers often prioritize ease of integration over security controls. This creates a dangerous situation where agents can potentially access and manipulate enterprise systems with minimal oversight.
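The contrast can be made concrete with a small sketch. The code below is illustrative only: the class and method names (`ToolGateway`, `register_tool`, `grant`) are hypothetical and not part of any MCP SDK. It shows the deny-by-default posture that permissive tool servers lack, where an agent can only invoke tools it has been explicitly granted.

```python
# Hypothetical sketch of a deny-by-default tool gateway for AI agents.
# All names here are illustrative, not drawn from the MCP specification.

class ToolGateway:
    def __init__(self):
        self._tools = {}    # tool name -> callable
        self._grants = {}   # agent id -> set of allowed tool names

    def register_tool(self, name, fn):
        self._tools[name] = fn

    def grant(self, agent_id, tool_name):
        self._grants.setdefault(agent_id, set()).add(tool_name)

    def call(self, agent_id, tool_name, **kwargs):
        # Deny by default: the opposite of an "extremely permissive" server.
        if tool_name not in self._grants.get(agent_id, set()):
            raise PermissionError(f"{agent_id} may not call {tool_name}")
        return self._tools[tool_name](**kwargs)

gateway = ToolGateway()
gateway.register_tool("read_kb", lambda query: f"results for {query!r}")
gateway.register_tool("delete_ticket", lambda ticket_id: f"deleted {ticket_id}")
gateway.grant("support-agent-1", "read_kb")  # read access only, no destructive tools
```

The design choice worth noting is the default: a permissive server asks "is this tool blocked?", while the gateway asks "was this tool ever granted?", which fails closed when a grant is missing.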

“We don’t even have a defined technical agent-to-agent protocol that all companies agree on,” explained Jon Aniano, SVP of Product and CRM Applications at Zendesk. “How do you balance user expectations versus what keeps your platform safe?”

The problem is compounded by the fact that enterprises are deploying AI agents at an unprecedented pace. Customer service platforms, for instance, now handle interactions at a “volume and scale that we haven’t contemplated as businesses and as a society.” This rapid adoption is outstripping the development of appropriate security frameworks and controls.

The Accountability Conundrum

Perhaps the most complex challenge is determining accountability when AI agents are involved in decision-making processes. Traditional security models assume clear lines of responsibility—a human user takes an action, and that user is accountable for the consequences. But what happens when an AI agent makes a decision based on its training and the permissions it’s been granted?

Consider a customer service scenario where a human agent consults an AI system, which then takes action on behalf of the customer. “So now you’ve got a human talking to a human that’s talking to an AI,” Aniano noted. “The human tells the AI to take action. Who’s at fault if it’s the wrong action?”

This accountability question becomes even more complex when multiple AI agents and multiple humans are involved in a workflow. Each participant in the chain could potentially influence the outcome, making it difficult to trace responsibility for security incidents or errors.

Current Approaches: Strict Controls and Gradual Expansion

In the absence of comprehensive security frameworks, enterprises are adopting various interim measures to mitigate risks. Zendesk, for example, maintains “very strict” controls over AI agent access and scope. Their agents typically can access knowledge bases and provide information, but they’re restricted from writing code or executing commands on servers.

When AI agents do need to call APIs, Zendesk requires that these interactions be “declaratively designed” and explicitly sanctioned. Actions must be specifically called out and approved before agents can execute them. This approach provides a layer of human oversight while still allowing some degree of automation.

However, customer demand for more capable AI agents is creating pressure to expand these boundaries. “We’re kind of holding the gates right now,” Aniano admitted, acknowledging that enterprises are struggling to balance innovation with security.

The Future: Standing Authorization and Trust

Looking ahead, some experts predict that AI agents may eventually be granted more trust and autonomy than human users in certain contexts. Agents don’t get tired, don’t make emotional decisions, and can consistently apply security policies. In scenarios where consistency and precision are critical, AI agents might actually be more reliable than human operators.

“We’re on the cusp of giving agents standing authorization in a few cases that are generally safe,” said Xanthos, citing coding assistance as an example. From there, the industry will likely move toward more open-ended scenarios, though risky situations that could “mutate the state of the production system” will likely remain off-limits for autonomous agents.

The key challenge will be establishing trust in AI agents’ decision-making capabilities while maintaining appropriate oversight and control. This will require new security frameworks that account for agent autonomy, clear accountability models, and robust mechanisms for monitoring and auditing agent behavior.

What Security Teams Can Do Now

While the industry works toward comprehensive solutions, security teams can take several practical steps to mitigate risks:

Implement the principle of least privilege: Grant AI agents only the minimum permissions necessary to perform their designated tasks. Regularly review and adjust these permissions as agents' roles evolve.
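As a minimal sketch of both halves of this advice, the snippet below (with hypothetical role names and permission strings) defines narrow per-agent grants and a review helper that surfaces permissions an agent holds but never uses, which are natural candidates for revocation.

```python
# Illustrative least-privilege role table; agent names and permission
# strings are hypothetical examples, not a real product's scheme.
AGENT_ROLES = {
    "kb-answer-bot": {"kb:read"},                # answers questions only
    "triage-bot":    {"kb:read", "ticket:tag"},  # may also label tickets
}

def is_allowed(agent, permission):
    """Check a grant; unknown agents get no permissions at all."""
    return permission in AGENT_ROLES.get(agent, set())

def unused_permissions(agent, action_log):
    """Permissions granted but never exercised in the (agent, permission)
    action log -- candidates for removal during a periodic review."""
    used = {perm for a, perm in action_log if a == agent}
    return AGENT_ROLES.get(agent, set()) - used
```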

Establish clear audit trails: Ensure that all agent actions are logged and traceable, even when agents interact with other agents or human operators. This helps maintain accountability and enables incident investigation.
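One way to keep multi-party chains traceable is to record the full delegation path in every log entry rather than just the final actor. The sketch below is a simple illustration (the field names are assumptions, not a standard schema): each record captures who originated the request and every human or agent it passed through.

```python
import json
import time

def log_agent_action(log, *, actor_chain, action, target, outcome):
    """Append one traceable, structured record to the audit log.
    actor_chain preserves who delegated to whom (originator first),
    e.g. ["human:alice", "agent:support-bot"], so incident review can
    reconstruct the human -> AI -> AI path later."""
    entry = {
        "ts": time.time(),
        "actor_chain": actor_chain,
        "action": action,
        "target": target,
        "outcome": outcome,
    }
    log.append(json.dumps(entry))  # serialized for an append-only store
    return entry

audit_log = []
log_agent_action(
    audit_log,
    actor_chain=["human:alice", "agent:support-bot"],
    action="issue_refund",
    target="order-1042",
    outcome="success",
)
```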

Use declarative API designs: When agents need to interact with external systems, require that these interactions be explicitly defined and approved rather than allowing open-ended tool discovery.
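A declarative design can be as simple as a manifest that enumerates every sanctioned action and its limits up front; anything absent from the manifest is rejected at runtime. The action names and limit fields below are invented for illustration.

```python
# Hypothetical action manifest: every action an agent may take is declared
# and approved ahead of time; undeclared actions fail closed.
DECLARED_ACTIONS = {
    "refund":         {"max_amount": 50, "approved_by": "payments-team"},
    "resend_receipt": {"approved_by": "support-lead"},
}

def execute(action, **params):
    spec = DECLARED_ACTIONS.get(action)
    if spec is None:
        # Open-ended tool discovery would have allowed this; a manifest doesn't.
        raise ValueError(f"undeclared action: {action}")
    if "max_amount" in spec and params.get("amount", 0) > spec["max_amount"]:
        raise ValueError(f"amount exceeds declared limit for {action}")
    return f"executed {action}"
```

The point of the pattern is that the approval happens when the manifest is written and reviewed, not at call time, so the agent never negotiates its own boundaries.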

Start with low-risk scenarios: Begin deploying AI agents in contexts where mistakes have minimal impact, then gradually expand their responsibilities as trust and confidence grow.

Leverage existing fine-grained controls: Some tools, like Splunk, already offer index-level access controls that can be applied to AI agents. Identify and utilize these existing capabilities where available.

Implement human review checkpoints: For high-risk operations, maintain human oversight and approval requirements even as you automate other aspects of workflows.

Monitor for anomalous behavior: Establish baselines for normal agent behavior and implement monitoring to detect deviations that might indicate security issues or errors.
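A baseline check does not need to be elaborate to be useful. The sketch below uses a plain z-score over historical daily action counts; the threshold of three standard deviations is an arbitrary starting point, not a recommendation from the source.

```python
from statistics import mean, stdev

def is_anomalous(baseline_counts, current_count, threshold=3.0):
    """Flag when the current period's action count deviates from the
    historical baseline by more than `threshold` standard deviations
    (a simple z-score test over per-agent daily counts)."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        # A perfectly flat baseline: any change at all is a deviation.
        return current_count != mu
    return abs(current_count - mu) / sigma > threshold

# An agent that normally performs ~100 actions/day suddenly performs 400.
history = [100, 110, 95, 105]
```

In practice this would run per agent and per action type, feeding alerts into the same audit pipeline that records agent actions.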

The Path Forward

The rapid advancement of AI agents represents both an incredible opportunity and a significant security challenge. As Xanthos noted, “There’s no going back, obviously; this is moving faster than maybe even mobile did. So the question is what do we do about it?”

The answer likely involves a combination of developing new security frameworks specifically designed for autonomous agents, adapting existing controls to account for AI-specific risks, and maintaining a healthy skepticism about granting too much autonomy too quickly. Security teams must balance the benefits of AI automation with the need to protect enterprise systems and data.

As the industry navigates this transition, collaboration between security professionals, AI developers, and enterprise leaders will be crucial. Only by working together can we develop the frameworks and controls needed to harness the power of AI agents while mitigating their risks.
