New research from Software Analyst Cyber Research (SACR) and Stanford Graduate School of Business makes it clear: AI agent adoption has outpaced the security architectures designed to contain it, especially across enterprise AI programs.

The momentum is real. With over 3 million agents operating globally and enterprises spinning up thousands per week, the security challenge has shifted from whether to deploy agents to how to secure them at runtime—the moment an agent decides to act, calls a tool, and touches enterprise data.

The scale makes manual oversight impossible. Enterprises now run roughly 144 non-human identities for every human user, and when shadow agents and ephemeral instances are included, active identities can reach thousands per team across AI systems and services.

Yet the identity systems managing them were never designed for this. As the researchers conclude: "Traditional identity and access management systems were designed for two primary actors: humans and deterministic machine identities. AI agents fit neither model cleanly."

In working with thousands of organizations deploying AI agents, we've found that getting this right comes down to three questions:

  1. Where are my AI agents?
  2. What can they connect to?
  3. What can they do?

These are the key questions that the Okta blueprint for the secure agentic enterprise enables you to answer. The organizations that have invested in answering all three will be meaningfully better positioned to detect, respond to, and contain the failures that are inevitable at scale.

1. Where are my AI agents?

You need the ability to discover agents no matter where they were built or deployed—across SaaS platforms, browsers, endpoints, and emerging agentic AI ecosystems.

What's actually happening:

  • Organizations are discovering thousands of previously unknown agents in initial scans
  • The fastest-growing category, browser-based and local developer agents (Claude Code, Cursor, Windsurf), is the least visible in enterprise AI workflows
  • Visibility is fragmented across SaaS platforms, endpoints, and emerging agent ecosystems

This isn't a security tooling gap. It's an architectural one.

Why this matters:

Agents aren't deployed like traditional software. They're created everywhere, by anyone, at any time. Visibility isn't a one-time inventory problem; it's a continuous discovery problem.

What leading teams do differently:

  • Aggregate signals across browser, endpoint, SaaS, network, gateway, and MCP layers
  • Register agents as first-class identities at creation—not after the fact
  • Continuously assess agent posture, not just existence
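The discovery pattern above can be sketched in code. This is a minimal, hypothetical model (the class and field names are illustrative, not any vendor's API): sightings from different signal layers are reconciled into a single registry keyed by a stable fingerprint, and an agent is registered as a first-class identity the moment it is first seen.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical signal feed; real sightings would come from browser
# extensions, endpoint agents, SaaS audit logs, and MCP gateways.
@dataclass(frozen=True)
class Sighting:
    source: str        # e.g. "browser", "endpoint", "saas", "mcp-gateway"
    fingerprint: str   # stable agent identifier (client ID, binary hash, ...)
    detail: str

@dataclass
class AgentRecord:
    fingerprint: str
    sources: set = field(default_factory=set)
    first_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AgentRegistry:
    """Continuously reconcile sightings into one agent inventory."""
    def __init__(self):
        self.agents: dict[str, AgentRecord] = {}

    def ingest(self, sighting: Sighting) -> AgentRecord:
        record = self.agents.get(sighting.fingerprint)
        if record is None:
            # First sighting: register the agent as an identity now,
            # not after a periodic inventory pass.
            record = AgentRecord(fingerprint=sighting.fingerprint)
            self.agents[sighting.fingerprint] = record
        record.sources.add(sighting.source)
        return record

registry = AgentRegistry()
registry.ingest(Sighting("browser", "agent-123", "OAuth grant to CRM"))
registry.ingest(Sighting("mcp-gateway", "agent-123", "tool call: export_contacts"))
print(len(registry.agents))                            # one agent...
print(sorted(registry.agents["agent-123"].sources))    # ...seen via two layers
```

The key design choice is that ingestion is event-driven: the inventory converges continuously instead of drifting between scans.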

Most organizations don't have an agent inventory.

Leading teams have continuous agent discovery.

2. What can they connect to?

Once an agent exists, its risk isn't defined by what it is—it's defined by everything it can reach.

Agents don't operate in isolation. They connect to SaaS applications, APIs, databases, MCP servers, and other agents, often simultaneously and at machine speed across AI systems.

What's actually happening:

  • Connections, not identities, are defining the true blast radius
  • MCP (Model Context Protocol) is rapidly becoming the execution layer for agents, and according to SACR's research, the ecosystem is currently immature: plaintext credentials are common, OAuth adoption remains limited, and tool poisoning attacks are highly effective 
  • Many organizations lack even a basic inventory of which MCP servers and tools are in use 

These security risks aren't edge cases. This is the baseline.
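The plaintext-credential problem is concrete enough to check for. Below is a rough sketch, assuming the "mcpServers"-style configuration shape many MCP clients use; the server names, variable names, and the key-name heuristic are all illustrative, and a real scanner would inspect actual client config files and use stronger secret detection.

```python
import json
import re

# Illustrative MCP client config (shape mirrors common clients;
# contents are made up for this example).
config_text = """
{
  "mcpServers": {
    "crm": {
      "command": "crm-mcp-server",
      "env": { "CRM_API_KEY": "sk-live-abc123..." }
    },
    "docs": {
      "command": "docs-mcp-server",
      "env": { "DOCS_BASE_URL": "https://docs.example.com" }
    }
  }
}
"""

# Crude heuristic: variable names that usually hold credentials.
SECRET_KEY_HINT = re.compile(r"(key|token|secret|password)", re.IGNORECASE)

def find_plaintext_secrets(config: dict) -> list[tuple[str, str]]:
    """Return (server, variable) pairs whose names suggest embedded credentials."""
    findings = []
    for server, spec in config.get("mcpServers", {}).items():
        for var in spec.get("env", {}):
            if SECRET_KEY_HINT.search(var):
                findings.append((server, var))
    return findings

print(find_plaintext_secrets(json.loads(config_text)))
# [('crm', 'CRM_API_KEY')]
```

Even a heuristic like this surfaces the baseline problem: credentials sitting in plaintext next to the tool definitions an agent executes.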

Why this matters:

A compromised agent doesn't fail gracefully. It moves laterally across systems, chains access across SaaS applications, APIs, and data stores, and operates at machine speed. The blast radius isn't theoretical; it's immediate.

What leading teams do differently:

  • Enforce least-privilege access across every connection path (MCP, SaaS, APIs)
  • Replace static credentials with scoped, short-lived, user-bound access
  • Secure agent-to-agent interactions as rigorously as user access with strong identity verification
  • Log every connection into centralized monitoring and detection systems
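The second bullet above, scoped, short-lived, user-bound access, can be made concrete with a small sketch. This is not any vendor's token format; in practice this would be a signed token (for example a JWT) issued by the identity provider, but the three checks at authorization time are the point.

```python
from datetime import datetime, timedelta, timezone

def mint_agent_token(agent_id: str, user_id: str, scopes: set,
                     ttl: timedelta = timedelta(minutes=5)) -> dict:
    """Mint an illustrative agent credential: narrow scope, short TTL, user-bound."""
    return {
        "sub": agent_id,                            # the agent acting...
        "act_for": user_id,                         # ...on behalf of this user only
        "scopes": scopes,                           # least privilege for this task
        "exp": datetime.now(timezone.utc) + ttl,    # minutes, not months
    }

def authorize(token: dict, required_scope: str, user_id: str) -> bool:
    """Allow a connection only if the token is live, user-bound, and in scope."""
    if datetime.now(timezone.utc) >= token["exp"]:
        return False                  # expired: no standing access
    if token["act_for"] != user_id:
        return False                  # not bound to this user's rights
    return required_scope in token["scopes"]

token = mint_agent_token("agent-123", "alice", {"crm:read"})
print(authorize(token, "crm:read", "alice"))    # True
print(authorize(token, "crm:write", "alice"))   # False: out of scope
print(authorize(token, "crm:read", "bob"))      # False: wrong user binding
```

Contrast this with a static API key: the key answers only "is this the agent?", while the checks above also answer "for whom, for what, and for how long?"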

Most organizations manage access.

Leading teams control connection paths.

3. What can they do?

This is where the model breaks for most organizations.

Knowing where agents are and what they can connect to isn't enough—because agents don't behave like traditional systems. As Lawrence Pingree, Distinguished Analyst at SACR, notes, "an agent can stay within its permitted access boundaries while still doing something unexpected, harmful, or misaligned with its original intent."

They are non-deterministic, adaptive, and capable of acting in ways that weren't explicitly predefined.

What's actually happening:

  • Agents behave non-deterministically, adapting based on prompts, context, and the tools available to them
  • The same action can be safe in one context and dangerous in another
  • Security decisions are still being made at the access grant, not at execution

This is the core failure—and it calls for agentic security.

Why this matters:

Traditional security asks: "Is this person authorized to run this code?" Runtime identity security for agents asks a harder question: "Should this code run, even if this agent is authorized?"

The shift from access control to intent evaluation and behavioral analysis is what makes deterministic governance alone insufficient.

What leading teams do differently:

  • Enforce real-time, context-aware authorization at execution
  • Evaluate intent, sequence, and behavioral patterns, not just permissions
  • Introduce human-in-the-loop approvals for sensitive actions
  • Continuously monitor for behavioral drift and anomalies
  • Implement kill switches and dynamic escalation to stop risky actions instantly
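Those five controls share one shape: the decision moves from the access grant to the moment of execution. The sketch below is a simplified illustration (the action names, thresholds, and kill-switch mechanism are hypothetical), showing a policy gate that allows, denies, or escalates to a human at the instant an agent tries to act.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    NEEDS_APPROVAL = "needs_approval"   # human-in-the-loop escalation

# Kill switch: adding an agent here stops its actions instantly.
KILLED_AGENTS: set = set()

# Illustrative policy inputs.
SENSITIVE_ACTIONS = {"delete_records", "export_all", "transfer_funds"}
BULK_THRESHOLD = 1000

def authorize_execution(agent_id: str, action: str, context: dict) -> Decision:
    """Decide at execution time, not at the access grant."""
    if agent_id in KILLED_AGENTS:
        return Decision.DENY
    if action in SENSITIVE_ACTIONS:
        return Decision.NEEDS_APPROVAL        # sensitive: require a human
    # Same action, different context: anomalous volume gets escalated
    # even though the agent is "authorized" to perform the action.
    if context.get("record_count", 0) > BULK_THRESHOLD:
        return Decision.NEEDS_APPROVAL
    return Decision.ALLOW

print(authorize_execution("agent-123", "read_record", {"record_count": 1}))
print(authorize_execution("agent-123", "export_all", {}))
KILLED_AGENTS.add("agent-123")                # dynamic escalation: hard stop
print(authorize_execution("agent-123", "read_record", {"record_count": 1}))
```

Note that the agent's permissions never change in this example; only the execution-time context does. That is the difference between enforcing permissions and controlling behavior.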

Most organizations enforce permissions.

Leading teams control behavior in real time.

What CISOs should do now

SACR's research offers clear guidance for security leaders evaluating their approach to agentic AI security:

  1. Start with deterministic governance — but don't stop there. Policy-based access control is the foundation, but it's not sufficient. Build toward behavioral visibility and dynamic enforcement.
  2. Invest in observability before non-deterministic governance. The quality of your security decisions depends on the quality of your data. You can't make good runtime authorization decisions without understanding what agents are actually doing.
  3. Make an explicit decision about non-deterministic governance. Intent-based authorization and dynamic control aren't default next steps — they require architectural investment. Decide now whether you're building toward this or accepting the risk of static controls.
  4. Assess your agent archetypes before selecting a vendor. Homegrown agents, SaaS-embedded agents, and local developer agents have fundamentally different security requirements. Understand your mix before you commit to a platform.
  5. Treat MCP security as a distinct requirement. MCP is no longer just an integration standard; it's becoming the execution layer for agents and a live control surface for their behavior. If your vendor doesn't have a plan for securing MCP traffic, you have a gap.

How Okta approaches agentic security

SACR's research recognizes that Okta starts from a fundamentally different position than other vendors in this space: as the identity provider already trusted by 19,000 organizations, agent security becomes an extension of existing infrastructure for enterprise AI rather than a new point product.

Okta's approach maps directly to the three questions:

  • Where are my agents? Identity Security Posture Management (ISPM), now generally available, combined with the Secure Access Monitor plugin (in Early Access) that captures browser OAuth grants, Claude Code activity, and MCP server calls.
  • What can they connect to? The Identity Assertion Grant (ID-JAG)—an open standard Okta co-developed—binds agent permissions to the specific user's existing access rights, with three-tier authorization spanning user context, OAuth scopes, and fine-grained policy via fga.dev.
  • What can they do? CIBA-based human-in-the-loop approval workflows, global token revocation, full audit trails, and upcoming per-agent hard-stop capabilities.
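For readers unfamiliar with ID-JAG, the flow is a two-step token exchange. The sketch below is schematic only: parameter names follow the draft OAuth Identity Assertion Authorization Grant specification (built on RFC 8693 token exchange and the RFC 7523 JWT assertion grant) and may change as the draft evolves; the endpoints and token values are placeholders, not Okta's API.

```python
# Step 1: the agent's client exchanges the user's ID token at the
# identity provider's token endpoint for an identity assertion JWT
# (the ID-JAG), using RFC 8693 token exchange. This is what binds
# the agent's access to the specific user's existing rights.
idp_token_exchange = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "requested_token_type": "urn:ietf:params:oauth:token-type:id-jag",
    "subject_token": "<user-id-token>",       # placeholder: the user's ID token
    "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
    "audience": "https://resource-app.example.com",   # placeholder resource
    "scope": "crm:read",                      # least privilege for the task
}

# Step 2: the client presents the ID-JAG at the resource application's
# token endpoint as a JWT assertion grant (RFC 7523) to obtain an
# access token limited to what that user can already do there.
resource_token_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
    "assertion": "<id-jag-from-step-1>",      # placeholder: the ID-JAG JWT
}

print(idp_token_exchange["requested_token_type"])
print(resource_token_request["grant_type"])
```

The practical effect: no static agent credential ever exists at the resource; every access token traces back through the ID-JAG to a specific user and a specific grant.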

As the report concludes, Okta's strongest differentiation is consolidation—a single control plane that eliminates tool sprawl across identity types and covers the full spectrum from enterprise API-accessing agents through to developer-workstation MCP clients.

The organizations that understand this early aren't just deploying agents. They're building the systems to:

  • Discover them
  • Control their reach
  • And intervene in their behavior

Because at scale, the question isn't whether agents have access. It's whether you can see what they're doing — and stop it when it matters.


Additional resources:

Read the full SACR/Stanford report

Explore the Okta AI Blueprint

Audit Your AI Identity Standards

Continue your Identity journey