AI is Everywhere, But CISOs are Still Securing It with Yesterday’s Skills and Tools, Study Finds


The AI Security Crisis: Why Security Leaders Are Flying Blind in the Age of Artificial Intelligence

The rapid adoption of artificial intelligence across enterprise environments has created a perfect storm of security challenges that most organizations are unprepared to handle. According to Pentera's AI and Adversarial Testing Benchmark Report 2026, a staggering 67% of Chief Information Security Officers (CISOs) report having limited visibility into how AI is being used across their organizations—with not a single respondent claiming full visibility into AI deployments.

This isn’t just another incremental security challenge. We’re witnessing a fundamental shift in how technology operates within enterprises, and traditional security approaches are breaking down at an alarming rate.

The Visibility Crisis: AI Systems Operating in the Shadows

Modern AI deployments don’t exist in isolation. They’re woven throughout existing corporate infrastructure—embedded in cloud platforms, integrated with identity systems, connected to applications, and flowing through data pipelines. The problem? Ownership is scattered across multiple teams, from data science to DevOps to business units, creating a fragmented landscape where no single entity has a complete picture.

Think about what this means in practical terms: security teams cannot answer basic questions about AI systems. Which identities do these systems use? What data can they access? How do they behave when security controls fail? What happens when an AI system makes autonomous decisions that violate security policies?
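Those basic questions map naturally onto a per-system inventory record. As a minimal sketch (all field names and values here are hypothetical, not drawn from the report), each AI deployment could carry an entry answering exactly what the security team needs to know:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """Hypothetical inventory entry for one AI system, capturing the
    basics a security team should be able to answer on demand."""
    name: str
    owner_team: str                                          # who is accountable
    service_identities: list = field(default_factory=list)   # which identities it uses
    data_scopes: list = field(default_factory=list)          # what data it can access
    failure_mode: str = "unknown"                            # behavior when controls fail
    autonomous_actions: bool = False                         # can it act without a human?

# Illustrative entry for an imaginary customer-facing chatbot.
record = AIAssetRecord(
    name="support-chatbot",
    owner_team="customer-success",
    service_identities=["svc-chatbot@corp"],
    data_scopes=["crm:read"],
    failure_mode="fail-closed",
    autonomous_actions=True,
)
print(record.owner_team)
```

A register like this does not solve the visibility problem by itself, but the fields that come back `"unknown"` or empty are precisely where the blind spots described above live.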

The report reveals a sobering reality: organizations are essentially operating AI systems in the dark, with no clear understanding of their attack surface, data exposure, or potential for misuse. This isn’t just a technical problem—it’s a fundamental governance failure that leaves organizations exposed to risks they cannot even identify, let alone mitigate.

The Skills Gap: Where Budget Meets Reality

Here’s where the story gets particularly interesting. Despite AI security being a hot topic in boardrooms and executive discussions, the primary barrier isn’t financial—it’s human capital. The report identifies lack of internal expertise as the top obstacle, cited by 50% of CISOs, followed closely by limited visibility (48%) and insufficient AI-specific security tools (36%).

Only 17% of respondents pointed to budget constraints as their primary concern. This data point is crucial because it reveals that organizations are willing to invest in AI security, but they’re hitting a wall when it comes to finding people who actually understand how to secure these systems.

The skills shortage isn’t just about knowing traditional security concepts. AI systems introduce entirely new behaviors that security professionals are still learning to assess: autonomous decision-making capabilities, indirect access paths that traditional controls can’t detect, and privileged interactions between systems that create unexpected attack vectors.

Legacy Controls: The Band-Aid Solution

In the absence of AI-specific security tools and expertise, organizations are doing what they’ve always done—extending existing security controls to cover new technology. The report found that 75% of CISOs are relying on legacy security controls, such as endpoint protection, application security, cloud security, and API security tools, to protect their AI infrastructure.

Only 11% reported having security tools specifically designed to secure AI systems. This approach mirrors what happened during previous technology shifts, where organizations initially adapt existing defenses before more tailored security practices emerge. But there’s a critical difference: AI systems don’t just use existing infrastructure—they fundamentally change how that infrastructure behaves.

Traditional security controls were designed for predictable, rule-based systems. AI introduces probabilistic behavior, continuous learning, and autonomous decision-making that can create security gaps these controls were never designed to detect or prevent.
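The mismatch can be illustrated with a toy example (the filter and outputs below are invented for illustration, not taken from the report): a pattern-matching data-loss rule reliably blocks a deterministic system that always formats sensitive data the same way, but a probabilistic model can paraphrase the same data into a form the rule was never written to catch.

```python
import re

# Hypothetical rule-based DLP control: deny any output containing a
# credit-card-style number in the canonical format.
CARD_PATTERN = re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b")

def legacy_control_allows(output: str) -> bool:
    """Deterministic check: blocks only exact pattern matches."""
    return CARD_PATTERN.search(output) is None

# A rule-based system emits data in the format the control expects...
deterministic_output = "Card on file: 4111-1111-1111-1111"

# ...while a probabilistic model may restate the same sensitive data
# in a paraphrase the pattern cannot see.
llm_output = "The card ends in 1111; digits spelled out: four one one one, repeated."

print(legacy_control_allows(deterministic_output))  # False: correctly blocked
print(legacy_control_allows(llm_output))            # True: slips through
```

The point is not that regex filters are bad—it is that controls built around fixed, predictable output formats have no answer for systems whose output space is effectively unbounded.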

The Path Forward: Building AI Security from the Ground Up

The findings paint a clear picture: AI security challenges stem from foundational gaps rather than a lack of awareness or intent. As AI becomes increasingly central to enterprise operations, organizations must shift their focus from simply deploying AI to understanding how to secure it effectively.

This requires building expertise from the ground up, developing new validation methodologies for AI-specific security controls, and creating visibility into AI systems that goes beyond traditional monitoring approaches. It’s not enough to know that AI exists in your environment—you need to understand how it operates, what it can access, and how it might be exploited.

The report suggests that organizations will need to invest heavily in training, develop new security testing methodologies that account for AI’s unique characteristics, and create governance frameworks that can keep pace with AI’s rapid evolution.

The AI security crisis isn’t coming—it’s already here. The question is whether organizations will recognize the severity of the challenge and take action before the first major AI security breach makes headlines, or whether they’ll continue operating in the dark until it’s too late.

For security leaders, the message is clear: the traditional playbook won’t work for AI. It’s time to acknowledge the gaps, invest in the right expertise, and build security approaches that can keep pace with AI’s transformative potential—before that same potential becomes a liability.

This article was written by Ryan Dory, Director of Technical Advisors at Pentera.

#AIsecurity #Cybersecurity #ArtificialIntelligence #CISOs #SecurityLeadership #AITransformation #EnterpriseSecurity #TechLeadership #AIgovernance #SecuritySkills #AIinfrastructure #CyberThreats #DigitalTransformation #SecurityVisibility #AITrends

