How C-suite leaders are taming shadow AI

About the author

Brian Prince

Newsroom Reporter

Brian Prince is a marketing content creator and former journalist who has been focused on cybersecurity for more than 15 years.

February 11, 2026

Whether business leaders are ready or not, AI agents are transforming how companies do business. As recent studies have shown, employees are turning to AI to achieve productivity gains, even if it means doing so outside the IT department's control.

A 2025 Gartner survey found that 69% of organizations suspect or have evidence that employees are using prohibited public GenAI, and Gartner projects that by 2030, more than 40% of enterprises will experience security or compliance incidents due to unauthorized shadow AI.

Having any IT security blind spots is dangerous enough, but the speed, autonomy, and privileges of AI agents can create a potent recipe for risk. An unmanaged agent with access to sensitive information can violate compliance regulations, leak information, or expand the blast radius of an attack if it falls under the control of a threat actor.

Faced with this shift, how are organizations securing the agentic enterprise? We spoke with C-suite leaders to find out. Here are their strategies for transforming shadow AI into secure innovation. 

1. Prioritize visibility over restriction

Studies have shown that workers are willing to circumvent official guidelines in the name of leveraging the tools that will help them do their jobs. A 2025 KPMG survey, for example, found that nearly half of US workers were using AI tools at work in unauthorized ways. Bryan McGowan, Global Trusted AI Leader at KPMG, explained in an interview that employees often use unauthorized tools because they offer greater speed and ease of use than approved ones.

Attempting to block these tools may be counterproductive, says Amar Akshat, SVP Technology and Chief Architect at Paysafe.

“If you block things, you are just blocking visibility,” he says. “The landscape of AI is evolving so fast that people will use AI on a day-to-day basis, and blocking it is just encouraging them not to be visible around using it.”


2. Lower the friction to sanctioned AI tools

To prevent employees from choosing their own unvetted tools, James Simcox, Chief Operations and Product Officer of Equals Money, says leaders must be proactive.

“We proactively pushed out ChatGPT to our staff early on because we were worried that if we didn't do something, they would do it themselves,” he says.

Shadow IT isn’t a new problem — employees have always brought in new tools — but AI agents, with their broad access to data and systems, add a fresh layer of risk, says Simcox.

“If you accidentally bring in an AI agent type product that's hooked up to a bunch of services and no one notices, that's a real problem,” adds Simcox.

Of course, employees can’t follow policies they’re not aware of. Educating employees about which tools are sanctioned, how to use them productively, and the rules governing the use of corporate data helps reduce risk and empower your workforce.


3. Manage access and mitigate excessive permissions

The key to fostering a secure AI landscape is managing the identities and permissions of non-human agents with the same degree of rigor as humans. Unfortunately, governance efforts often lag behind AI use. Okta's AI at Work 2025 report revealed that only 36% of organizations had a centralized governance model for AI. In addition, just 10% of respondents reported their organization had a well-developed strategy or roadmap for managing non-human identities (NHIs).

This reality leaves organizations with a fundamental challenge: how do they manage a situation that can quickly become unwieldy?

The risk is compounded by permission inheritance, where an AI agent assumes the full rights of the user who created it. “If you're an admin on our CRM, you might have access to everything, and when you're struggling to get that AI tool to work, you might say, ‘screw it, have all the permissions I have, use my admin account’ and see what happens,” says Simcox. This behavior creates a sprawling network of over-privileged agents. 
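The over-privileged pattern Simcox describes has a straightforward least-privilege counterpoint: instead of letting an agent inherit its creator's full rights, grant it only the scopes its task actually requires. A minimal sketch (all names and scope strings here are hypothetical illustrations, not any specific product's API):

```python
# Hypothetical scopes held by an admin user who is creating an AI agent.
ADMIN_SCOPES = {"crm:read", "crm:write", "crm:delete", "billing:read", "users:admin"}

def scope_agent_token(creator_scopes: set[str], requested: set[str]) -> set[str]:
    """Grant an agent only the scopes it explicitly requests, verified
    against what its creator actually holds -- never the full set."""
    missing = requested - creator_scopes
    if missing:
        # The creator can't delegate rights they don't have.
        raise PermissionError(f"creator lacks scopes: {sorted(missing)}")
    return requested & creator_scopes

# A reporting agent asks only for read access, so that is all it gets,
# even though its creator is an admin.
agent_scopes = scope_agent_token(ADMIN_SCOPES, {"crm:read"})
print(sorted(agent_scopes))  # ['crm:read']
```

The key design choice is that delegation is opt-in and enumerated: an agent that later needs `crm:write` must request it explicitly, leaving an auditable record, rather than silently exercising its creator's admin account.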

The first step toward unifying visibility and control is discovering which AI agents are in your environment, Simcox says. Having the right tools in place helps.

“I can't have the staff just looking across every single platform to see if every single tool has the right identity or the right access that it needs for the role it has,” Simcox continues. 

That’s where Identity Security Posture Management (ISPM) comes in, adds Simcox. ISPM can provide continuous, automated discovery of any identity — including AI agents — and flag those with excessive access before they can be exploited.

“Non-human identities and AI agents are the two fastest growing attack surfaces for enterprises,” says Jack Hirsch, VP of Product Management at Okta. “Fragmented visibility and control only raises the risk of data breaches, compliance violations, biased AI outputs, and other challenges.”


The path forward: Identity is the foundation for AI

By treating every AI agent as a primary identity — with its own lifecycle, permissions, and oversight — leaders can move past the era of shadow AI and into an era of secure, agentic innovation.

“The enterprise of the future will be driven by automation and powered by AI,” says Hirsch, “and the foundation of that future must include comprehensive visibility and identity governance.” 

Looking to pull AI out of the shadows? Learn how to discover shadow AI, uncover hidden identity risks and misconfigurations, and map agents’ blast radius with Agent Discovery in ISPM, available now.


