A question is landing in the inboxes of nearly every CISO and CIO right now.
"This AI agent stuff looks great. But how many identity admins am I going to need to hire to manage all of it?"
It is a completely reasonable question. It is also, if taken literally, the wrong one, because organizations that approach AI agent governance as a staffing problem will lose. Not because they can't hire fast enough, but because even if they could, no amount of human administration solves a problem that is fundamentally architectural.
The right question is not how many people you need to manage AI agents; it’s what kind of foundation you need to build so that the answer stays manageable as your agent fleet grows from dozens to thousands, and beyond. That foundation starts with identity, and the window to build it is right now.
Why the AI agent identity problem is different from any IAM problem you’ve solved before
AI agents break the core assumptions that traditional identity and access management (IAM) was built on. Assumptions like:
- Identity scales with headcount or customer acquisition. Human IAM grew as you hired employees or acquired customers. Agent IAM scales with deployment pipelines. A single developer can deploy hundreds of agents before lunch. Your provisioning processes were not designed for this velocity, and neither is a team of identity admins working from a ticketing queue.
- Identities are known before they're needed. Traditional provisioning assumes that someone submits a request, an identity is created, and access is granted. With agents, the deployment may already be in production before your security team knows it exists. You cannot staff your way to visibility you were never given.
- Principals behave predictably. A service account calls the same three APIs in the same sequence every time. An agent makes autonomous decisions about what to access based on its context and instructions. The attack surface is no longer just the credentials it holds; it's also the decisions that it’s making.
- Identity chains are shallow. IAM was built for a simple user-to-app or user-to-system model. Agents introduce entirely new models, such as agent-to-agent, agent-to-tool, or agent-to-agent-to-app, creating complex webs of access that an admin reviewing a provisioning ticket cannot see, let alone evaluate.
AI agents systematically break the assumptions traditional identity and access management was built on, requiring an architectural shift in governance
These deviations from traditional IAM rules explain why the answer to the hiring question isn't more administrators, it's a better architecture.
If it acts, it needs an identity
Before we get to the architecture, there’s one key principle that everything else is built on, and organizations that skip it create a problem no amount of staffing or tooling can fix later:
Every AI agent operating in your environment needs its own identity. Full stop, no exceptions.
Not a shared credential. Not a borrowed service account. Not something that gets sorted out after deployment. Its own identity, provisioned before it ever touches a production system.
What makes up a complete agent identity?
- A unique, verifiable identifier
- A defined and documented scope of permitted actions
- A named human owner who is accountable for it
- An audit record of what it does
- A clear path to suspension and revocation
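A minimal sketch of what a complete agent identity record could look like in code. The class, field names, and scope strings are all illustrative, not any vendor's API; the point is that identifier, scope, owner, audit trail, and revocation live in one place:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
import uuid

class AgentStatus(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"
    REVOKED = "revoked"

@dataclass
class AgentIdentity:
    """One record per agent: unique ID, scope, owner, audit trail, lifecycle."""
    owner: str                        # named human accountable for this agent
    permitted_actions: frozenset      # documented scope, e.g. {"crm:read"}
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    status: AgentStatus = AgentStatus.ACTIVE
    audit_log: list = field(default_factory=list)

    def record(self, action: str, detail: str) -> None:
        """Log an action; refuse anything outside the declared scope."""
        if self.status is not AgentStatus.ACTIVE:
            raise PermissionError(f"agent {self.agent_id} is {self.status.value}")
        if action not in self.permitted_actions:
            raise PermissionError(f"{action!r} is outside this agent's scope")
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
        })

    def revoke(self) -> None:
        """Clear path to revocation: one call, effective immediately."""
        self.status = AgentStatus.REVOKED

# Provisioned before deployment, with an owner and an explicit scope
agent = AgentIdentity(owner="jane@example.com",
                      permitted_actions=frozenset({"crm:read"}))
agent.record("crm:read", "fetched account 123")
agent.revoke()
```

An agent missing any of these fields is, by definition, not fully provisioned.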
What happens without it? We know that shadow IT can lead to data being stored in unauthorized locations. Now, shadow AI can lead to autonomous action in unauthorized contexts. You cannot audit what you don't know exists. You cannot right-size permissions you've never reviewed. You cannot revoke access for an agent that was never formally provisioned. And you absolutely cannot staff a team large enough to chase down agents that were never in the system to begin with.
Every agent needs a human thread
When an agent causes an incident, who do you call?
If you can't answer that, you have a governance problem that no headcount can solve. Maintaining that human thread requires two concepts working together.
- Human ownership. Every agent needs a named, accountable owner inside your organization. Not the developer who wrote it three months ago and has since moved to another team. Not the vendor who supplied the underlying model. A person or team whose current responsibilities include approving the agent's scope, being notified when it behaves anomalously, and being accountable when it causes harm. This is not an administrative burden that requires a dedicated hire; it is a governance requirement that gets baked into how agents are deployed.
- Delegation and the authority chain. When an agent takes an action, one of two things should be true: it's acting within its own explicitly defined permissions, or it's acting on authority that was explicitly delegated to it by a human. Both are legitimate. Both need to be traceable.
However, in multi-agent systems, this chain can get complex fast:
A human approves a workflow → an orchestrator agent acts on that approval → a sub-agent is invoked with delegated scope → an API call reaches a sensitive system.
Every action in a multi-agent workflow must trace back to the originating human authority. Without the chain, incident response starts from zero.
Every hop in that chain needs to trace back to the originating human. Not only because regulators will require it, but because when something goes wrong, you need to reconstruct exactly what happened, why the agent believed it was authorized, and where the breakdown occurred. Without the chain, you have an API call from an anonymous process, and your incident response team starts from zero.
Ownership answers who is responsible. Delegation answers whose authority is being used. Together, they help ensure that no agent operates without a human being accountable for it. Neither concept requires an army of admins to enforce if it's built into the foundation correctly.
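The chain above can be modeled as data. A hedged sketch, with hypothetical names, of how each hop could record its grantor so any action traces back to the originating human, and how scope can only narrow as authority is delegated down:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Delegation:
    """One hop in an authority chain: who granted which scope to whom."""
    grantor: str                       # human, or the agent delegating downward
    grantee: str                       # agent receiving the authority
    scope: frozenset                   # actions covered by this delegation
    parent: Optional["Delegation"] = None  # the hop this authority came from

def originating_human(d: Delegation) -> str:
    """Walk every hop back to the human whose authority is being used."""
    while d.parent is not None:
        d = d.parent
    return d.grantor

def effective_scope(d: Delegation) -> frozenset:
    """Scope can only narrow down the chain, never widen."""
    scope = d.scope
    hop = d.parent
    while hop is not None:
        scope &= hop.scope
        hop = hop.parent
    return scope

# Human approves a workflow -> orchestrator acts -> sub-agent gets delegated scope
approval = Delegation("jane@example.com", "orchestrator",
                      frozenset({"billing:read", "billing:write"}))
sub_task = Delegation("orchestrator", "invoice-agent",
                      frozenset({"billing:read"}), parent=approval)
```

When every credential issued to a sub-agent carries its parent delegation, incident response is a chain walk, not an investigation.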
Connecting builders and guardians
There are two teams at the center of this problem, with different incentives and, often, limited overlap.
- The builders are moving fast, measured on shipping and solving business problems with impressive new features. Identity governance is not their primary concern - shipping cool stuff fast is.
- Security, IAM, and compliance teams own the risk posture. They often learn about new agents after deployment, sometimes long after. They're responsible for risk they usually had no part in creating.
This tension isn't new. It played out with cloud, with mobile, with shadow IT. But the stakes are higher with agents. A misconfigured cloud storage bucket can expose data passively. A misconfigured agent can take actions actively, and the blast radius is completely different.
The wrong response from security teams is to become a bottleneck, requiring manual review for every agent deployment. Beyond slowing everything down, this approach has a 100% historical failure rate. Developers route around it, and you end up with ungoverned agents instead of governed ones. It also, incidentally, is exactly the model that requires hiring more administrators to keep up with deployment velocity.
The right response is shared ownership with automated enforcement: security teams define the policies and guardrails in advance; builder teams declare scope and ownership at deployment time; the pipeline handles provisioning automatically. Security gets visibility without being on the critical path. Builders get speed without bypassing controls. And your identity team spends its time designing governance frameworks instead of processing tickets.
What the foundation for AI agent identity security actually looks like
So let’s get back to our original question: "This AI agent stuff looks great. But how many identity admins am I going to need to hire to manage all of it?"
Answer: You don't need to scale your identity team linearly with your agent fleet. You need to build a foundation that scales without them.
That foundation has four components.
- Agent identity as code. Agent identity should not be a manual provisioning step. It should be a deployment artifact - defined in a manifest alongside the agent's code, reviewed as part of the development process, provisioned automatically when the agent ships. Security review happens during code review, before production, not as a separate administrative step afterward.
- Class-based governance. You will never directly manage millions of agent instances. You will manage dozens of agent classes. A class defines permitted systems and data access, maximum data classification ceiling, when human escalation is required, and audit requirements. Individual agents inherit from classes automatically. Your governance scales with the number of classes you maintain, not with the number of instances running at any given moment.
- Zero standing privilege. Agents should hold no persistent permissions. Every action requires a scoped, time-limited credential for a specific task. When the task ends, the credential expires. This doesn't just reduce your attack surface; it also makes the scale problem tractable. There is no entitlement sprawl accumulating across thousands of agents that someone has to review and clean up. Every credential issuance is a governance event with an automatic record.
- The delegation chain as infrastructure. Multi-agent workflows need to treat delegation chains as a first-class architectural concern rather than an afterthought. When the chain is technically enforced infrastructure, you get traceability automatically. When it's a design principle that someone has to manually verify, it becomes a full-time job.
The foundation that scales AI agent governance without scaling your identity team linearly with your agent fleet.
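To make the first two components concrete, here is one hypothetical shape for identity as a deployment artifact: a manifest checked in beside the agent's code, validated in the pipeline against a class definition. Every field name and value here is invented for illustration:

```python
# A hypothetical agent manifest, checked into the repo beside the agent's code
MANIFEST = {
    "name": "invoice-reconciler",
    "owner": "payments-team@example.com",   # named accountable human owner
    "agent_class": "finance-read-only",     # class it inherits governance from
    "scope": ["erp:invoices:read", "crm:accounts:read"],
    "max_data_classification": "confidential",
}

# Governance defined once per class, inherited by every instance of that class
AGENT_CLASSES = {
    "finance-read-only": {
        "allowed_scopes": {"erp:invoices:read", "erp:reports:read",
                           "crm:accounts:read"},
        "classification_ceiling": "confidential",
        "requires_owner": True,
    },
}

LEVELS = ["public", "internal", "confidential", "restricted"]

def validate_manifest(manifest: dict, classes: dict) -> list:
    """Pipeline gate: return a list of violations; an empty list passes."""
    cls = classes.get(manifest.get("agent_class", ""))
    if cls is None:
        return [f"unknown agent class {manifest.get('agent_class')!r}"]
    errors = []
    if cls["requires_owner"] and not manifest.get("owner"):
        errors.append("manifest must name a human owner")
    extra = set(manifest.get("scope", [])) - cls["allowed_scopes"]
    if extra:
        errors.append(f"scope exceeds class allowance: {sorted(extra)}")
    if (LEVELS.index(manifest["max_data_classification"])
            > LEVELS.index(cls["classification_ceiling"])):
        errors.append("data classification exceeds class ceiling")
    return errors
```

Because the check runs in the pipeline, a missing owner or an over-broad scope fails the deploy during code review, with no ticket and no admin in the loop.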
Identity has never been set-it-and-forget-it, and AI agents won’t change that
Provisioning an agent identity at deploy time is the beginning of governance, not the end. But here too, the answer is automation and discipline, not headcount.
- Access review cycles need to include agents. Is this agent still active? Is its scope still appropriate? Has the business context changed? Automate the data collection. Have humans make the decisions on the exceptions. Don't create a process that requires someone to manually review thousands of agents quarterly - design one that surfaces only what requires human judgment.
- Right-size based on actual behavior. Agents are frequently provisioned with a broader scope than they end up using. Usage data tells you what an agent actually accesses versus what it's permitted to access. Let the system identify the gap. Have humans approve the reduction. This is how you prevent the kind of permission sprawl that eventually does require significant administrative effort to untangle.
- Decommission is a first-class operation. When a workflow is retired, its agents should be retired automatically. Zombie agents, deployed and forgotten but still credentialed, are among the highest-risk patterns in agentic environments. They are also the direct result of treating decommission as a manual process that someone gets around to eventually.
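The right-sizing step above lends itself to automation. A small illustrative sketch (names are hypothetical) of surfacing the gap between granted and used permissions, so a human only has to approve the reduction:

```python
from collections import Counter

def rightsize_report(granted: set, usage_log: list) -> dict:
    """Compare what an agent may do with what it actually did."""
    used = set(usage_log)
    return {
        "unused_permissions": sorted(granted - used),  # candidates for removal
        "usage_counts": dict(Counter(usage_log)),      # evidence for the owner
        "proposed_scope": sorted(granted & used),      # human approves this cut
    }

# e.g. an agent granted write access it has never exercised
report = rightsize_report(
    granted={"crm:read", "crm:write", "erp:read"},
    usage_log=["crm:read", "crm:read", "erp:read"],
)
```

The system finds the gap; the owner makes the call. That division of labor is what keeps quarterly reviews from becoming a full-time job.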
The answer to the question
So how many identity admins do you need to hire to manage thousands of AI agents?
If you build the foundation correctly, the answer is: not as many as you think, and far fewer than you'd need without it.
What you actually need is a small number of people who can design policy frameworks, build pipeline integrations, interpret behavioral signals at scale, and hold builder teams accountable to governance standards.
What you don't need (and what won't work regardless) is a new team of people manually provisioning, reviewing, and decommissioning agent identities from a ticketing queue. That model collapses under its own weight before you reach a hundred agents, let alone a thousand.
The organizations that govern AI agents well at scale won't be the ones that hired fastest. They'll be the ones who built the right foundation earliest, while the fleet was still small enough to shape, and before the hiring question became impossible to answer.
Five things to do before your fleet of agents grows:
- Declare the principle internally - every agent gets an identity, no exceptions, starting now
- Inventory what's already running, including what you didn't formally deploy
- Require human ownership assignment for every agent currently in production
- Define your agent classification tiers before you need them at scale
- Make identity a deployment requirement, not a post-deployment review
Actionable steps security and IT leaders should take now, while the agent fleet is still small enough to shape.
The agent explosion isn't a future problem. Agents are now arriving in production environments at organizations that haven't finished deciding how to govern them.
The window to build this right is open. And the answer was never more people; it was always better architecture.
Building your AI agent governance foundation? Okta bridges the gap by enabling builders to deploy faster and giving security teams governance that scales. Learn more here.