The AI governance mirage: Why 72% of enterprises don’t have the control and security they think they do

A new survey conducted by VentureBeat has exposed a massive blind spot in enterprise AI strategy. The data reveals that a staggering 72% of organizations report two or more AI platforms that they each identify as their "primary" layer, creating a dangerous landscape of security vulnerabilities and control failures.

The Strategic Paradox: Why Leading Enterprises Are Building Around Their Vendors

Take Mass General Brigham (MGB), the largest employer in Massachusetts with 90,000 employees. Last year, it had to shut down a sprawl of uncontrolled internal proofs-of-concept that had sprung up as employees raced ahead with AI projects. Instead of building its own AI layer, MGB decided to wait for the software giants it already uses to deliver on their AI roadmaps.

But here's where it gets interesting: even then, MGB has been forced to build workarounds where those companies haven't done enough. It has just completed a full-scale custom build around Microsoft's Copilot—essentially putting a "skin" around Copilot to handle the safety and data privacy concerns the major model providers haven't yet mastered.

The Data Disconnect: Confidence vs. Systematic Oversight

The research found that a majority (56%) of respondents said they are “very confident” that they’d detect a misbehaving AI model. However, nearly a third of respondents have no systematic mechanism to detect AI misbehavior until it surfaces through users or audits. In a world where telemetry leakage accounts for 34% of GenAI incidents, and the global average breach cost has hit $4.4M, finding out after the damage is done is the default for too many companies.

The Day-Two Bill: Managing Sprawl, Creep, and Lock-in

Brian Gracely, Senior Director at Red Hat, warns that many enterprises are falling into a trap of deceptive initial wins. “Day zero is very, very easy,” Gracely said. “Day two is when the bill comes due.”

Red Hat is positioning its software layer (OpenShift AI) as the necessary buffer to prevent enterprises from getting buried in a single provider’s proprietary ecosystem. Gracely’s point is direct: If your control system is built entirely inside one cloud provider’s toolset, you are effectively “renting a cage.”

The Dynamic Defense: MassMutual's Refusal to Bet

While some enterprise companies seek an “AI operating system” that oversees all of their AI technologies and apps, others are simply refusing to sign the check. Sears Merritt, CIO and head of enterprise technology at MassMutual, is managing the governance conundrum by intentionally staying in a state of high-velocity flexibility.

“Things are so dynamic, it’s hard to know which of the AI vendors will end up on top,” Merritt said. For that reason, MassMutual is refusing to enter any long-term contracts with AI vendors.

The Rise of “Platform Creep”

The leading providers are also shifting toward "managed agents," as reflected by Anthropic's recent announcement. This offering suggests continued platform creep, whereby providers like OpenAI and Anthropic take over more and more of the AI infrastructure—in this case, the memory of agentic session details.

The Security Irony: The Fox Guarding the Hen House

The most jarring finding in our Q1 data is what we call the “Security Irony”: the fact that the providers most responsible for creating enterprise AI risk are the same ones enterprises are using to manage it.

Respondents said the top selection criterion for AI orchestration platforms was “security and permissions generally” (37.1%), beating out other criteria like cost, flexibility, control and ease of development. Yet, the market is choosing convenience over sovereignty.

The Path Forward: Toward a Unified Control Plane

So, what is the way out? Sriraman argued that the industry desperately needs a “central observability platform”—a “Dynatrace for AI”—that provides full end-to-end visibility, including model drift and safety prompting, agent behavior analytics, privilege escalation alerts, and forensic logging.
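To make the idea concrete, here is a minimal sketch of what a central observability hub could look like: every AI platform emits events into one forensic log, and alert rules surface privilege escalations and critical incidents instead of leaving them to be discovered by users or audits. All names and the event schema (`AIEvent`, `ObservabilityHub`, `event_type` values) are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Hypothetical event record for a central AI observability platform.
# Field names are illustrative, not a real product's schema.
@dataclass
class AIEvent:
    model_id: str
    event_type: str   # e.g. "drift", "privilege_escalation", "agent_action"
    severity: int     # 1 (info) .. 5 (critical)
    detail: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ObservabilityHub:
    """Single sink for events from every AI platform, with alert rules."""
    def __init__(self) -> None:
        self.events: list[AIEvent] = []      # forensic log: everything is kept
        self.alert_rules: list[Callable[[AIEvent], bool]] = []
        self.alerts: list[AIEvent] = []      # events that tripped a rule

    def add_rule(self, rule: Callable[[AIEvent], bool]) -> None:
        self.alert_rules.append(rule)

    def record(self, event: AIEvent) -> None:
        self.events.append(event)
        # Surface matching events immediately, rather than waiting for an audit.
        if any(rule(event) for rule in self.alert_rules):
            self.alerts.append(event)

hub = ObservabilityHub()
hub.add_rule(lambda e: e.event_type == "privilege_escalation")
hub.add_rule(lambda e: e.severity >= 5)

hub.record(AIEvent("copilot-custom", "agent_action", 2, "summarized schedule"))
hub.record(AIEvent("copilot-custom", "privilege_escalation", 4, "agent requested admin scope"))
print(len(hub.events), len(hub.alerts))  # → 2 1
```

The point of the sketch is the architecture, not the code: one log and one rule engine sit outside every provider's toolset, which is exactly the end-to-end visibility a "Dynatrace for AI" would have to supply.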

The Bottom Line: The “Big Red Button”

Visibility and integration are only half the battle. In a high-stakes industry like healthcare, Sriraman argues that any legitimate control plane must also offer a hard-stop capability. “We need a big red button,” he said. “Kill it. We should be able to have that… without that, don’t put anything in the operational setting.”
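The "big red button" maps to a familiar software pattern: a process-wide kill switch that every agent action must check before executing. The sketch below is a minimal illustration under that assumption; `KillSwitch` and `agent_step` are hypothetical names, not part of any real control plane.

```python
import threading

class KillSwitch:
    """Process-wide hard stop: once pressed, no guarded action may run."""
    def __init__(self) -> None:
        self._halted = threading.Event()  # thread-safe flag

    def press(self) -> None:
        # The "big red button": flips the flag for every caller at once.
        self._halted.set()

    def guard(self) -> None:
        # Every agent action calls this first; after press(), all actions fail.
        if self._halted.is_set():
            raise RuntimeError("AI operations halted by kill switch")

SWITCH = KillSwitch()

def agent_step(action: str) -> str:
    SWITCH.guard()  # hard stop is checked on every single step
    return f"executed: {action}"

print(agent_step("summarize chart"))  # runs normally before the button is pressed
SWITCH.press()
try:
    agent_step("order medication")
except RuntimeError as err:
    print(err)  # blocked: the switch halts everything, immediately
```

The design choice that matters is that the check sits in the execution path itself, not in a dashboard: if the control plane can only observe but not interrupt, it is not the hard-stop capability Sriraman is describing.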

