How customer feedback shaped Okta for AI Agents

Okta’s product leaders on building a security solution for the agentic enterprise in real time.

About the Author

Diana Blass

Journalist, Video Producer

Diana Blass is a journalist and video producer specializing in technology storytelling. As the founder of Diana Blass Productions, she creates documentary-style content and educational videos for global brands and media outlets.

08 May 2026

Software is no longer just following orders. Powered by AI, applications are becoming autonomous agents that decide and act on their own. This shift from passive tool to active participant raises a critical question for every organization: If an agent can act on your behalf, who is in control?

This is the defining challenge for enterprises in the age of AI. AI agents are the fastest-growing identity in business, but they bring a new level of unpredictability. They operate with their own reasoning and inference, capable of acting in ways their human counterparts might not expect.

“When you think about an agent, you’re really dealing with an entity that has its own little brain,” says Harish Peri, Okta’s SVP and General Manager for AI Security. “The only control that our customers have over an agent that might run off and do its own thing is the identity, access, and authorization controls that they can enforce at scale across their enterprise.”

This new reality is why Okta worked closely with customers to develop Okta for AI Agents, extending its identity platform to treat agents as a first-class identity. The goal is to provide the same visibility, access controls, and governance that organizations already use to manage their human workforce.

Building with customers, for customers

The development of Okta for AI Agents has been a deeply collaborative process. As organizations began deploying agents and turned to Okta for help, a rapid feedback loop of listening, learning, and building took shape.

“The whole product development lifecycle is now AI-enabled, which is really compressing how quickly we can iterate and innovate with customers,” says Ely Kahn, Chief Product Officer at Okta. This new, faster cycle of iteration allows Okta to build prototypes, share them with a community of beta customers, and incorporate feedback before a feature is even finalized.

A clear example of this is agent-to-agent connections* (planned to launch soon), which was developed from customer input. “This came directly from one of our beta customers,” Kahn explains. “They’re building a homegrown agent and they want to connect that agent to other agents. Based on their feedback, we quickly built agent-to-agent connections.”

The feature allows an AI agent to be both an identity and a resource that other agents can access, all while maintaining a clear audit trail. This helps ensure that even as agents interact with each other, security teams do not lose sight of who is doing what.
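The article does not describe the underlying mechanics, but the dual role it names, an agent acting as both a caller and a callable resource with every interaction attributed to a human owner, can be sketched in a few lines of Python. All class and field names below are hypothetical illustrations, not Okta APIs:

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """An agent is both an identity (it can call) and a resource (it can be called)."""
    agent_id: str
    owner: str  # the human accountable for this agent


@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, caller: Agent, target: Agent, action: str) -> None:
        # Attribute every agent-to-agent call to a caller, a target, and
        # the human owner on each side, so "who is doing what" stays answerable.
        self.entries.append({
            "caller": caller.agent_id,
            "caller_owner": caller.owner,
            "target": target.agent_id,
            "target_owner": target.owner,
            "action": action,
        })


billing = Agent("billing-bot", owner="alice@example.com")
reporting = Agent("report-bot", owner="bob@example.com")
log = AuditLog()
log.record(reporting, billing, "read:invoices")
print(log.entries[0]["caller_owner"])  # bob@example.com
```

The point of the sketch is the audit entry: even when no human initiates the call, each side of the interaction still traces back to an accountable person.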

Answering the three foundational questions

Without a unified governance strategy, most organizations struggle to answer three foundational questions:

  • Where are my agents?

  • What can they connect to?

  • What can they do?

Okta for AI Agents is designed to answer these questions from a single control plane. It starts with discovery, helping organizations find every agent in their environment, including unapproved “shadow AI” agents. Once discovered, agents are registered in a central directory, assigned a human owner, and brought under existing identity controls.
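A toy version of such a directory, with hypothetical names and no relation to Okta's actual implementation, might separate approved agents from discovered-but-unapproved ones:

```python
class AgentDirectory:
    """Central registry: every discovered agent gets a record and a human owner."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id: str, owner: str, approved: bool = True) -> None:
        self._agents[agent_id] = {"owner": owner, "approved": approved}

    def shadow_agents(self) -> list:
        # Discovered but never approved: candidates for review or revocation.
        return [aid for aid, rec in self._agents.items() if not rec["approved"]]


directory = AgentDirectory()
directory.register("hr-assistant", owner="carol@example.com")
directory.register("unknown-scraper", owner="unassigned", approved=False)
print(directory.shadow_agents())  # ['unknown-scraper']
```

Once a shadow agent surfaces in that list, the remediation path described in the article applies: assign it an owner and bring it under the same identity controls as everything else.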

From there, the focus shifts to protection and governance. “By getting in the middle, we give our customer security teams control and the ability to kill agent access if something goes wrong,” Peri says.

This means replacing risky, long-lived tokens with secure, scoped access that rotates automatically. It also means enabling agents to operate with the least-privileged access necessary. 
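The article does not specify token formats, but the general pattern, short-lived credentials scoped to a single task rather than standing keys, can be sketched as follows. The `TokenIssuer` class and its fields are illustrative assumptions, not an Okta API:

```python
import secrets
import time


class TokenIssuer:
    """Issue short-lived, narrowly scoped tokens instead of long-lived credentials."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds

    def issue(self, agent_id: str, scopes: list) -> dict:
        return {
            "token": secrets.token_urlsafe(16),
            "agent_id": agent_id,
            "scopes": scopes,                      # only what this task needs
            "expires_at": time.time() + self.ttl,  # expiry forces regular rotation
        }

    def is_valid(self, token: dict, required_scope: str) -> bool:
        return time.time() < token["expires_at"] and required_scope in token["scopes"]


issuer = TokenIssuer(ttl_seconds=300)
tok = issuer.issue("report-bot", scopes=["read:reports"])
print(issuer.is_valid(tok, "read:reports"))    # True
print(issuer.is_valid(tok, "delete:reports"))  # False
```

Because every token expires on its own, revoking an agent reduces to refusing its next issuance, which is what makes the "kill agent access" control above tractable at scale.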

The platform helps ensure that an agent’s effective permissions reflect both the assigned human’s permissions and the permissions specifically granted to the agent, Kahn explains. “You can be sure that the agent is working with least-privileged permissions,” he says. “Which frankly, is the most important thing that any developer or IT admin can do to reduce the risk of compromise associated with AI agents.”
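One simple way to picture that rule, assuming effective access is the overlap of the two grants, is a set intersection. The function name and permission strings below are illustrative, not Okta's:

```python
def effective_permissions(human_perms: set, agent_perms: set) -> set:
    """Least privilege: the agent may act only where both grants agree,
    i.e. the intersection of the assigned human's permissions and the
    permissions explicitly delegated to the agent."""
    return human_perms & agent_perms


human = {"read:crm", "write:crm", "read:billing"}
agent = {"read:crm", "read:billing", "read:hr"}  # delegated to the agent
print(sorted(effective_permissions(human, agent)))  # ['read:billing', 'read:crm']
```

Under this model the agent can never exceed its owner (no `read:hr`, which the human lacks) nor exceed its own delegation (no `write:crm`, which was never granted to it).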

Building security for an AI-native world

The pace of AI innovation demands an equally rapid evolution in security. The traditional product development cycle is no longer fast enough to keep up with the threats and opportunities that emerge daily. By building in tandem with customers, security can become an enabler of innovation, not a blocker.

This constant feedback loop is creating a new rhythm for product delivery. “The roadmap has been tightened,” Kahn says. “Instead of quarterly launches, we're having big launches every month.” This agility helps ensure that as AI agents grow more sophisticated, the security and governance controls securing them are evolving right alongside them.

*Any mention in this article of solutions, features, functionalities, certifications, authorizations, or attestations that are not currently generally available or have not yet been obtained may not be delivered or obtained on time or at all. We assume no obligation to deliver on such items and you should not rely on them to make your purchase decisions.
