The ‘superuser’ blind spot: Why AI agents demand dedicated identity security

Unsecured AI agents may be holding hidden keys to your enterprise data. 

About the Author

Arnab Bose

Chief Product Officer, Workforce Identity Cloud

Arnab is the Chief Product Officer for all of Okta’s Workforce Identity solutions. Prior to joining Okta, Arnab was a VP of Product Management at Salesforce, where he led several efforts, from integrating Quip into the Salesforce Platform to Process Automation. Before Salesforce, he was a Senior Program Manager Lead at Microsoft. Arnab holds a BS in Operations Research and Industrial Engineering from Cornell University. When he’s not at work, Arnab enjoys cooking, bicycling, and track days at Sonoma Raceway.

23 June 2025

The strength of a door does not matter if an intruder has a key. This statement is true in both the physical and the digital world. 

When cybersecurity professionals discuss the importance of least privilege and concepts like Zero Trust and microsegmentation, they are really discussing who should hold which keys to which doors. Identity, authentication, and authorization are pillars of effective enterprise security — not just for users but also for applications, machines, and services. AI agents can now be added to that list.

Industry experts predict that the use of AI agents will explode in the coming years. Some 88% of the senior executives who participated in PwC’s May 2025 AI Agent Survey said their team or business function expects to increase AI-related budgets in the next 12 months in connection with agentic AI. Nearly 80% said their companies are already adopting AI agents. 

The reason for the excitement is simple. AI agents offer immense productivity gains for enterprises and their customers. However, with the promise of productivity comes a broadened attack surface for organizations to protect. Without effective oversight and management, AI agent adoption can become another form of shadow IT, where enterprises lack insight into the purpose, activity, and access rights of the agents in their environment. As agents proliferate, organizations need clear visibility into their access rights and granular control of the authorization process to be able to trust that agents will operate without compromising security and compliance. 

The hidden keys: Securing AI agents’ access 

So often, however, security and convenience feel as if they are at odds. Giving a non-human identity superuser permissions is convenient: any access the agent, application, or service needs from that point forward, it already has. But that practice increases risk — if the application or service is compromised, the attacker can use it to expand their reach and further infiltrate the organization.

This problem is magnified when it comes to AI agents. Like other non-human identities (NHIs), AI agents often lack clear ownership and human oversight. But unlike those other NHIs, AI agents behave autonomously, meaning they can act in ways security teams may not expect. Their access needs are dynamic, and completing tasks may lead them to look for ways to take advantage of interconnected permissions and inadvertently expose or leak data. 

Working autonomously, agents can chain together permissions to access applications or resources they are not authorized to reach, causing information to be shared in ways that are invisible to security teams. In the world of always-available AI agents, security means being able to define granular access policies and to revoke access and authorization after specified periods.
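The idea of granular, time-bound access can be made concrete with a minimal sketch. Everything below is illustrative (the names `AgentGrant`, `issue_grant`, and the scope strings are invented for this example, not part of any Okta product or standard): a grant names exactly the scopes an agent holds, expires on its own, and can be revoked early.

```python
# Hypothetical sketch: issuing a short-lived, narrowly scoped credential for an
# agent, so access expires automatically instead of living on as a standing key.
import time
from dataclasses import dataclass

@dataclass
class AgentGrant:
    agent_id: str
    scopes: frozenset       # the only resources this grant covers
    expires_at: float       # absolute expiry time (epoch seconds)
    revoked: bool = False

    def allows(self, scope: str) -> bool:
        """A request is allowed only if the grant is live and names the scope."""
        return (not self.revoked
                and time.time() < self.expires_at
                and scope in self.scopes)

def issue_grant(agent_id: str, scopes: set, ttl_seconds: int) -> AgentGrant:
    # Least privilege: the caller must enumerate scopes; there is no "all" default.
    return AgentGrant(agent_id, frozenset(scopes), time.time() + ttl_seconds)

grant = issue_grant("invoice-agent", {"billing:read"}, ttl_seconds=300)
assert grant.allows("billing:read")        # in scope and not expired
assert not grant.allows("billing:write")   # never granted, so denied
grant.revoked = True
assert not grant.allows("billing:read")    # revocation wins immediately
```

The point of the sketch is the default: an agent that is compromised after its grant expires, or after revocation, holds nothing an attacker can use.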

The ability to collaborate with other agents and to communicate with various applications to complete complex, multi-step tasks is what makes agentic AI attractive, but it also introduces risk. Agent workflows and dynamic tool invocation give threat actors more potential soft spots to attack. For example, an AI agent integrated with a vulnerable plug-in inherits that vulnerability. Threats such as insecure output handling and prompt injection, where threat actors use malicious prompts to dupe AI agents into taking illegitimate actions, can have a cascading impact throughout an organization, depending on the scope of the agent’s authorizations and access rights.

Identity is the new security perimeter

Addressing this reality requires a strategic shift from perimeter-based approaches to an identity-centric model that provides the control and visibility organizations need to safeguard their data and systems.

Legacy integration protocols lack the scalability, interoperability, and security offered by more modern approaches such as the Model Context Protocol (MCP) and the Agent2Agent protocol (A2A). However, even for current approaches designed with AI agents in mind, security is still a work in progress.

What is missing is a centralized control plane that enables organizations to enforce fine-grained permissions and that treats each agent-to-app connection as requiring its own evaluation and authorization. The goal is not only stronger management but also a smoother authentication and authorization process, so that agents can act on behalf of human users with minimal friction.
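To make the centralized-plane idea concrete, here is a minimal sketch of a policy decision point that evaluates each agent-to-app connection individually, with default-deny and an audit trail. All names and the policy format are invented for illustration; a real control plane would evaluate far richer context.

```python
# Hypothetical sketch of a central decision point: every agent-to-app
# connection is evaluated against explicit policy, nothing is assumed.
AUDIT_LOG = []

POLICIES = {
    ("scheduling-agent", "calendar-app"): "allow",
    ("scheduling-agent", "hr-app"): "deny",
}

def authorize(agent: str, app: str) -> bool:
    # Connections with no explicit policy are refused (default-deny), and
    # every decision is recorded so security teams retain visibility.
    decision = POLICIES.get((agent, app), "deny")
    AUDIT_LOG.append((agent, app, decision))
    return decision == "allow"

authorize("scheduling-agent", "calendar-app")   # allowed by policy
authorize("scheduling-agent", "payroll-app")    # no policy entry: denied
```

Because every decision flows through one point, the audit log doubles as the visibility layer the surrounding text calls for: security teams can see exactly which agent reached which app, and when.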

A new protocol to secure AI agents

As AI agent usage expands, identity will remain a critical touchstone for enterprise security. At Okta, we are developing Cross App Access, a new way to secure AI agents, which will work alongside efforts like MCP and A2A. As an extension of OAuth, Cross App Access brings visibility, control, auditability, and governance to both agent-driven and app-to-app interactions.
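For a sense of what "an extension of OAuth" can look like in practice, an existing and published OAuth extension with a related shape is RFC 8693 token exchange, in which one application presents a token it already holds to obtain a token scoped for another resource. Cross App Access is its own specification, so its actual parameters may differ; the sketch below only illustrates the general pattern, and the token and URL values are placeholders.

```python
# Illustrative only: the shape of an RFC 8693 OAuth token-exchange request
# body, an existing OAuth extension analogous in spirit to app-to-app access.
from urllib.parse import urlencode

def token_exchange_body(subject_token: str, audience: str) -> str:
    params = {
        # Standard RFC 8693 grant type identifier.
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,  # token the calling app already holds
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,            # the downstream app being accessed
        "scope": "documents:read",       # illustrative, narrowly scoped request
    }
    return urlencode(params)

body = token_exchange_body("placeholder-subject-token", "https://app.example.com")
```

The key property, shared by this pattern and by the goals described above, is that the downstream token is minted per connection and per scope, giving the identity provider a point of visibility and control over each app-to-app hop.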

Cross App Access is a protocol that addresses an ecosystem-wide blind spot. As software providers race to embed AI into their platforms, the web of interactions between agents, APIs, and applications is growing exponentially. Without defined standards, each connection could introduce new security gaps. Cross App Access offers a path toward interoperability and trust by enabling B2B SaaS builders to align around a common way to govern app-to-app access. It sets the foundation for secure collaboration between platforms, giving enterprises the confidence to adopt new AI capabilities without sacrificing control. 

As always, new technology brings new security challenges. But by putting identity first, organizations can stay ahead of the curve. 

Read more about Okta’s approach.
