The authorization problem that could break enterprise AI

The Silent Security Crisis: How Agentic AI is Rewriting the Rules of Identity Management

When an AI agent needs to log into your CRM, pull records from your database, and send an email on your behalf, whose identity is it using? And what happens when no one knows the answer? This isn’t just a technical question anymore—it’s becoming one of the most critical security challenges facing enterprises today.

Alex Stamos, chief product officer at Corridor, and Nancy Wang, CTO at 1Password, joined the VB AI Impact Salon Series to dig into the new identity framework challenges that come along with the benefits of agentic AI. What they revealed is a perfect storm of innovation outpacing security infrastructure, with potentially catastrophic consequences.

The Identity Crisis No One Saw Coming

“At a high level, it’s not just who this agent belongs to or which organization this agent belongs to, but what is the authority under which this agent is acting, which then translates into authorization and access,” Wang explained. This deceptively simple question is unraveling decades of established security practices.

The problem is that agentic AI doesn’t fit neatly into our existing identity frameworks. These autonomous systems need to act on behalf of humans, but they don’t have the same constraints, awareness, or accountability. They’re not malicious, but they’re also not human—and that’s where the danger lies.

How 1Password Ended Up at the Center of the Agent Identity Problem

Wang traced 1Password’s path into this territory through its own product history. The company started as a consumer password manager, and its enterprise footprint grew organically as employees brought tools they already trusted into their workplaces.

“Once those people got used to the interface, and really enjoyed the security and privacy standards that we provide as guarantees for our customers, then they brought it into the enterprise,” she said. The same dynamic is now happening with AI, she added. “Agents also have secrets, or passwords, just like humans do.”

This evolution from consumer to enterprise to AI agent security provider mirrors the broader tech industry’s trajectory. What started as a simple password manager is now grappling with questions about how autonomous systems authenticate, what permissions they should have, and how to audit their actions.

The Developer Behavior That’s Keeping Security Teams Up at Night

Stamos said one of the most common behaviors Corridor observes is developers pasting credentials directly into prompts, a serious security exposure. Corridor flags the behavior and steers the developer back toward proper secrets management.

“The standard thing is you just go grab an API key or take your username and password and you just paste it into the prompt,” he said. “We find this all the time because we’re hooked in and grabbing the prompt.”

This isn’t just lazy behavior—it’s a fundamental mismatch between how humans think about security and how AI agents need to operate. Developers are used to having direct control over their credentials. They’re comfortable with the cut-and-paste workflow because it gives them immediate feedback and control.

But when that same workflow is applied to AI agents, it creates massive vulnerabilities. An agent that has permanent credentials can operate indefinitely without oversight. If those credentials are compromised, the damage can spread silently across an entire infrastructure.
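As an illustration of the alternative Stamos points toward, the sketch below keeps the credential out of the model's context entirely and resolves it at tool-call time instead. The `CRM_API_KEY` variable name and the tool shape are hypothetical, not any specific product's API; in production the lookup would go to a secrets manager with rotation rather than an environment variable.

```python
import os

def build_agent_prompt(task: str) -> str:
    """Build a prompt that never embeds raw credentials."""
    # The model sees only the task; credentials stay out of the prompt,
    # so they are never logged or retained in model context.
    return f"Task: {task}\nUse the configured CRM tool for any API calls."

def call_crm_tool(endpoint: str) -> dict:
    # Credential is resolved at call time, outside the model's view.
    api_key = os.environ.get("CRM_API_KEY")  # hypothetical variable name
    if api_key is None:
        raise RuntimeError("CRM_API_KEY not provisioned for this agent")
    return {"endpoint": endpoint, "auth": "Bearer <redacted>"}

prompt = build_agent_prompt("Summarize open support tickets")
assert "CRM_API_KEY" not in prompt  # the secret never enters model input
```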

The False Positive Problem That Could Crash Your Entire Development Workflow

Another challenge in building a feedback loop between security agents and coding models is false positives, which eager-to-please large language models tend to accept without pushback. A single false positive from a security scanner can derail an entire coding session.

“If you tell it this is a flaw, it’ll be like, yes sir, it’s a total flaw!” Stamos said. But, he added, “You cannot screw up and have a false positive, because if you tell it that and you’re wrong, you will completely ruin its ability to write correct code.”

This is a fundamentally different challenge than traditional security scanning. Static analysis tools can afford to be conservative because they’re running in the background. Security agents working with AI developers need to be right every single time, with latency measured in hundreds of milliseconds.

The engineering challenge here is immense. You need to balance precision and recall in a way that traditional tools never had to consider. A false negative could allow a vulnerability through. A false positive could destroy developer productivity and trust in the security system.
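One way a security agent might manage that precision/recall tradeoff is to surface only near-certain findings to the coding model and suppress the rest. This is a minimal sketch of the idea; the finding records, `confidence` field, and threshold value are assumptions for illustration, not any real scanner's output format.

```python
# Hypothetical finding records from a security scanner.
findings = [
    {"rule": "hardcoded-secret", "confidence": 0.99},
    {"rule": "possible-sqli",    "confidence": 0.62},
    {"rule": "weak-hash",        "confidence": 0.97},
]

SURFACE_THRESHOLD = 0.95  # only near-certain findings reach the coding model

def surface_to_model(findings, threshold=SURFACE_THRESHOLD):
    """Suppress low-confidence findings rather than risk a false positive
    that would derail the model's subsequent code generation."""
    return [f for f in findings if f["confidence"] >= threshold]

assert [f["rule"] for f in surface_to_model(findings)] == [
    "hardcoded-secret", "weak-hash"
]
```

The cost, of course, is recall: the suppressed `possible-sqli` finding might be real, which is exactly the tension the paragraph above describes.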

Authentication is Easy, Authorization is Where Things Get Hard

“An agent typically has a lot more access than any other software in your environment,” noted Spiros Xanthos, founder and CEO at Resolve AI, in an earlier session at the event. “So, it is understandable why security teams are very concerned about that. Because if that attack vector gets utilized, then it can both result in a data breach, but even worse, maybe you have something in there that can take action on behalf of an attacker.”

This insight gets to the heart of why agent identity is such a thorny problem. Authentication—proving that an agent is who it claims to be—is relatively straightforward. We have standards for that. But authorization—determining what that agent is allowed to do—is where things get complicated.

The principle of least privilege should be applied to tasks rather than roles. “You wouldn’t want to give a human a key card to an entire building that has access to every room in the building,” Wang explained. “You also don’t want to give an agent the keys to the kingdom, an API key to do whatever it needs to do forever. It needs to be time-bound and also bound to the task you want that agent to do.”
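Wang's point about task-bound, time-bound credentials can be sketched as follows. The `TaskCredential` type, action names, and TTL are hypothetical, purely to illustrate scoping a credential to one task with a short lifetime rather than issuing a permanent API key.

```python
import time
import secrets
from dataclasses import dataclass, field

@dataclass
class TaskCredential:
    """Illustrative task-bound, time-bound credential (not a real API)."""
    task: str
    allowed_actions: frozenset
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

def mint(task: str, actions: set, ttl_seconds: int = 300) -> TaskCredential:
    # Scoped to one task and a short TTL: the opposite of a
    # long-lived "keys to the kingdom" API key.
    return TaskCredential(task, frozenset(actions), time.time() + ttl_seconds)

def authorize(cred: TaskCredential, action: str) -> bool:
    # Both conditions must hold: not expired, and within the task's scope.
    return time.time() < cred.expires_at and action in cred.allowed_actions

cred = mint("export-q3-report", {"crm.read"}, ttl_seconds=300)
assert authorize(cred, "crm.read")
assert not authorize(cred, "crm.delete")  # outside the task's scope
```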

The Standards War That’s Already Being Lost

Wang pointed to SPIFFE and SPIRE, workload identity standards developed for containerized environments, as candidates being tested in agentic contexts. But she acknowledged the fit is rough.

“We’re kind of force-fitting a square peg into a round hole,” she said.
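For context, a SPIFFE identity is a URI of the form `spiffe://<trust-domain>/<workload-path>`. The sketch below parses one into its parts; placing agents under an `/agent/...` path is our illustration of how the scheme might be stretched to cover agents, not a SPIFFE convention.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> dict:
    """Split a SPIFFE ID into trust domain and workload path.
    Format per the SPIFFE spec: spiffe://<trust-domain>/<path>."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a valid SPIFFE ID: {spiffe_id}")
    return {"trust_domain": parsed.netloc, "path": parsed.path}

# Hypothetical ID for an agent workload.
ident = parse_spiffe_id("spiffe://example.org/agent/crm-assistant")
assert ident == {"trust_domain": "example.org",
                 "path": "/agent/crm-assistant"}
```

The square-peg problem Wang describes is visible even here: the scheme names a workload, but says nothing about which human the agent is acting for or what it is authorized to do.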

The industry is in a standards war, but it’s not the kind of standards war that produces clear winners quickly. Stamos pointed to OIDC extensions as the current frontrunner in standards conversations, while dismissing the crop of proprietary solutions.

“There are 50 startups that believe their proprietary patented solution will be the winner,” he said. “None of those will win, by the way, so I would not recommend.”
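One reason OIDC-adjacent standards come up is that OAuth 2.0 Token Exchange (RFC 8693) already defines an `act` (actor) claim for delegation, which maps naturally onto "agent acting on behalf of a user." The claim values below are illustrative; real tokens would be signed JWTs issued by an identity provider, and the issuer, subject, and agent names are hypothetical.

```python
import time

now = int(time.time())
agent_token_claims = {
    "iss": "https://idp.example.com",       # hypothetical issuer
    "sub": "user:alice",                    # the human the agent acts for
    "act": {"sub": "agent:crm-assistant"},  # the party actually acting
    "exp": now + 300,                       # short-lived
    "scope": "crm.read",
}

def acting_party(claims: dict) -> str:
    """Return who is actually performing the action, per the RFC 8693
    'act' claim; fall back to the subject for non-delegated tokens."""
    return claims.get("act", {}).get("sub", claims["sub"])

assert acting_party(agent_token_claims) == "agent:crm-assistant"
assert agent_token_claims["sub"] == "user:alice"
```

The appeal of this shape is auditability: every action carries both the human authority (`sub`) and the agent that exercised it (`act`).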

The problem is that enterprises can’t wait for standards to emerge. They’re deploying agentic AI now, and they need solutions that work today. This creates a dangerous gap between what’s available and what’s secure.

When Edge Cases Become Mainstream at Scale

On the consumer side, Stamos predicted the identity problem will consolidate around a small number of trusted providers, most likely the platforms that already anchor consumer authentication. Drawing on his time as CISO at Facebook, where the team handled roughly 700,000 account takeovers per day, he reframed what scale does to the concept of an edge case.

“When you’re the CISO of a company that has a billion users, corner case is something that means real human harm,” he explained. “And so identity, for normal people, for agents, going forward is going to be a humongous problem.”

This is the scaling problem that keeps security professionals up at night. What seems like an edge case in a lab environment becomes a daily occurrence at enterprise scale. And when you’re dealing with billions of interactions, even a 0.1% failure rate represents millions of potential security incidents.
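The back-of-the-envelope arithmetic behind that claim, using an illustrative daily volume:

```python
interactions_per_day = 2_000_000_000  # illustrative enterprise-scale volume
failure_rate = 0.001                  # the 0.1% failure rate above
incidents = int(interactions_per_day * failure_rate)
assert incidents == 2_000_000  # millions of potential incidents, every day
```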

The Path Forward: Building Identity Infrastructure from Scratch

Ultimately, the challenges CTOs face on the agent side stem from incomplete standards for agent identity, improvised tooling, and enterprises deploying agents faster than the frameworks meant to govern them can be written. The path forward requires building identity infrastructure from scratch around what agents actually are, not retrofitting what was built for the humans who created them.

This means rethinking everything from how credentials are issued and managed to how access is audited and revoked. It means creating new standards for agent identity that can scale to billions of autonomous actions while maintaining the security and accountability that enterprises require.

The good news is that the industry is aware of these challenges and actively working on solutions. The bad news is that the window for getting this right is closing rapidly as agentic AI adoption accelerates.


Tags

agentic AI, identity management, security agents, AI authentication, authorization, least privilege, SPIFFE, SPIRE, OIDC, credential management, secrets management, false positives, developer security, enterprise security, AI governance

