This is the seventh and final blog in a seven-part series on identity security as AI security.
A Replit coding agent erased 1,206 customer records in seconds. In the Salesloft Drift breach, OAuth tokens sat active for months after workflows ended, compromising 700+ organizations. Another breach crossed four trust domains before anyone noticed. In Unit 42’s Agent Session Smuggling disclosure, a sub-agent embedded a silent stock trade inside a routine response. Chinese state actors weaponized Claude Code for the first documented large-scale autonomous cyberattack, targeting chemical manufacturing among other sectors. Four CVSS 9.3+ vulnerabilities hit Anthropic MCP, Microsoft Copilot, ServiceNow, and Salesforce, all exposing the same gap: agents that retrieve data under one user’s permissions and broadcast it to audiences that should never see it.
Six different failures. One root cause. Identity and authorization systems that still treat agents like users.
Among the 3,235 enterprise leaders surveyed in Deloitte’s 2026 State of AI report, 73% cite data privacy and security as their top AI risk, yet only 21% have a mature governance model for autonomous agents. Shadow AI adds $670,000 to average breach costs.
The answer is not another security layer bolted on after deployment. It is identity and authorization rebuilt as the operating system for autonomous trust.
The Pattern Hiding in Plain Sight
Read individually, each failure in this series looks like a specific gap. An expired token here, a missing scope check there. Read together, a different picture emerges.
A Replit agent deleted 1,206 records in seconds. At 5,000 operations per minute, per-action consent collapsed into consent fatigue. The fix: CIBA for async approval, Cross App Access for continuous authorization, and Token Vault for short-lived credentials.
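The consent-fatigue fix hinges on CIBA's backchannel flow: rather than blocking the agent on an interactive prompt for every action, the client sends an authentication request to the authorization server's backchannel endpoint and polls for the human's decision. A minimal sketch of the two payloads involved, with illustrative user, scope, and message values (the parameter names and grant-type URN come from the OpenID Connect CIBA specification):

```python
# Sketch of a CIBA backchannel approval flow, per the OpenID Connect
# CIBA core spec. User, scope, and message values are illustrative.

def build_ciba_request(user_hint: str, scope: str, binding_message: str) -> dict:
    """Form parameters for the backchannel authentication request."""
    return {
        "scope": scope,                      # what the agent is asking to do
        "login_hint": user_hint,             # identifies the approving human
        "binding_message": binding_message,  # shown on the user's device so they
                                             # know exactly what they are approving
        "requested_expiry": "300",           # approval window in seconds
    }

def build_token_poll(auth_req_id: str) -> dict:
    """Form parameters to poll the token endpoint until the human decides."""
    return {
        "grant_type": "urn:openid:params:grant-type:ciba",
        "auth_req_id": auth_req_id,
    }

# The agent keeps working; approval happens out of band on the user's device.
params = build_ciba_request(
    user_hint="alice@example.com",
    scope="crm:write",
    binding_message="Agent requests approval to update 3 CRM records",
)
```

The `binding_message` is the piece that fights consent fatigue: the human approves a specific, described action on their own device, asynchronously, instead of clicking through a prompt per operation.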
The Salesloft Drift breach exploited OAuth tokens that sat active months past business justification. With non-human identities outnumbering humans 144 to 1, durable delegated authority without lifecycle controls is the default vulnerability. Token Vault with task-scoped auto-refresh, Identity Governance for cross-system de-provisioning, and ISPM for orphan detection can help close this gap.
Then it got worse. A breach crossed four trust domains because no interoperable trust fabric could verify and revoke access across all of them in real time. Cross App Access with ID-JAG lineage, federated signals through IPSIE, and Universal Logout could have solved that.
Delegation chains became attack channels. Unit 42’s Agent Session Smuggling, Rehberger’s Cross-Agent Privilege Escalation, and EchoLeak (CVE-2025-32711, CVSS 9.3) all exploited the same gap: no scope attenuation across multi-hop delegation chains. Cross App Access with ID-JAG attestation chains and Token Vault for cryptographic proof of origin can address that.
Authorization became a safety case. Chinese state actors weaponized Claude Code for what Anthropic described as the first documented large-scale autonomous cyberattack. A poisoned calendar invite hijacked Gemini to control smart home actuators. JLR’s credential compromise shut down factories for five weeks, at a cost of £1.9 billion. CIBA for human-in-the-loop approval and FGA for real-time authorization are the controls that could have prevented or mitigated these attacks.
Finally, four CVSS 9.3+ vulnerabilities hit Anthropic MCP, Microsoft Copilot, ServiceNow, and Salesforce. Same pattern: agents acting on behalf of multiple users with no method for enforcing permission intersections across the output channel. FGA with batchCheck intersection, Token Vault with audience-scoped credentials, and Identity Governance could have fixed it.
Strip away the details and the pattern is obvious. Every agent security problem is an identity and authorization problem wearing a different mask. The OpenID Foundation’s Identity Management for Agentic AI whitepaper established the core use cases for agent authorization challenges. This series mapped those challenges to real-world breaches and production security architecture.
Six Failure Modes, One Root Cause - Identity & Authorization: The Operating System for AI Security (Source: Okta, Inc.)
Why Six Tools Cannot Do the Job of One Layer
Most organizations are approaching agent security the way they approached cloud a decade ago. Token vault here, API gateway there, governance dashboard somewhere else. Each tool solves its own problem. None solves the system.
The gap between deployment and governance is stark: 91% of organizations already use AI agents, but only 10% have a strategy for managing non-human identities, and 80% have already encountered risky agent behaviors, per McKinsey. The problem is not awareness. It is architecture.
Now picture the compound failure. Agent Session Smuggling (Blog 4) crosses trust domains (Blog 3) in a long-running workflow with drifting credentials (Blog 2) serving a shared channel with mixed permissions (Blog 6) at 5,000 operations per minute (Blog 1) while controlling physical infrastructure (Blog 5). Six tools that share no context cannot secure interconnected failures. No token vault sees the delegation lineage. No API gateway knows the original human intent. No governance dashboard tracks scope expansion across hops.
The real risk is not shadow AI. It is sanctioned AI with no identity. Shadow agents at least trigger alarms. The agents you officially deployed, connected to production systems, running on shared credentials with no lifecycle management? Those will breach you.
What Happens When Identity and Authorization Become the Substrate
The solution is not more tools. It is one layer that all tools share. Agents need to stop being treated like users and start being treated as first-class principals in IAM infrastructure. Not as a best practice. As an architectural requirement.
When every agent action flows through identity and authorization, five properties take hold:
- Provenance. Every action traces to an accountable human: through which delegation chain, under what policy, with what scope. This is what broke in Blogs 1 and 4. Cross App Access and Token Vault encode delegation lineage and cryptographic proof of origin in the token payload.
- Attenuation. When a primary agent delegates to a sub-agent, scope decreases, never increases. Agent Session Smuggling and EchoLeak both exploited the absence of this constraint. XAA enforces it structurally through the Identity Assertion JWT Authorization Grant (ID-JAG), not through policy alone.
- Continuous evaluation. Context shifts, risk changes, intent expires. The Drift breach persisted because authorization was checked once at token issuance and never again. Auth0 Fine-Grained Authorization checks permissions at the moment of action.
- Lifecycle governance. Employee leaves, their agents get revoked. Workflow completes, credentials expire. The JLR breach turned from an intrusion into a five-week shutdown because none of this happened. ISPM discovers every agent in your environment and assesses its risk.
- Interoperability. XAA, now part of MCP as “Enterprise-Managed Authorization,” standardizes secure agent connections across the domain boundaries that Blog 3 exposed. Every solution maps to open standards: OAuth 2.1, OIDC, RFC 8693, CIBA, SCIM, Shared Signals Framework, ID-JAG. The alternative is vendor lock-in at the identity layer, the one lock-in you cannot afford.
These are not features to evaluate on a comparison chart. They are architectural outcomes you get when identity and authorization become the substrate rather than an afterthought.
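The attenuation property above can be stated as an invariant: a sub-agent's effective scope at each hop is never wider than its parent's, and a request outside the parent's scope is refused outright rather than silently narrowed. A minimal sketch of that check (the function and scope names are illustrative, not an Okta or XAA API):

```python
class ScopeEscalationError(Exception):
    """Raised when a sub-agent requests scopes its parent does not hold."""

def attenuate(parent_scopes: set[str], requested: set[str]) -> set[str]:
    """Return the sub-agent's scopes, guaranteed no wider than the parent's.

    Refusing escalation, instead of quietly granting the intersection,
    makes the attempt visible in audit logs rather than invisible.
    """
    escalation = requested - parent_scopes
    if escalation:
        raise ScopeEscalationError(f"scope escalation refused: {sorted(escalation)}")
    return requested & parent_scopes

# A three-hop delegation chain: scope can only shrink at each hop.
root = {"crm:read", "crm:write", "mail:send"}
hop1 = attenuate(root, {"crm:read", "mail:send"})   # narrower than root
hop2 = attenuate(hop1, {"crm:read"})                # narrower still
```

In production this constraint lives in the token itself (the ID-JAG attestation chain), so a compromised sub-agent cannot simply skip the check; the sketch only shows the invariant being enforced.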
Discover, Onboard, Protect, Govern
Okta for AI Agents brings this architecture to life through four stages. Each maps directly to the failures above:
- Discover. ISPM scans agent platforms (Bedrock, Copilot Studio, Vertex AI), detects shadow agents, identifies NHIs, and flags excessive permissions. You cannot govern what you cannot see.
- Onboard. Universal Directory and AI Agent Directory give every agent first-class identity. Lifecycle Management handles request, approval, certification, and deprovision.
- Protect. OAuth authorization servers govern scopes and claims. XAA handles cross-domain delegation. Token Vault delivers secretless credentials. Auth0 FGA enforces runtime permissions at every action.
- Govern. CIBA enables crypto-bound human consent. Identity Governance runs access reviews and recertification. Universal Logout via IPSIE delivers sub-second revocation. Telemetry captures agent actions for regulatory evidence.
Okta for AI Agents: The Operating System for AI Security (Source: Okta, Inc.)
Three Questions Before Your Next Audit
- Can you trace every agent action to its authorizing human? Not which agent. Who, under what policy, with what scope. If not, start with XAA and ID-JAG for delegation lineage.
- Do credentials expire when the work does? The Drift breach started with tokens that should have been revoked months earlier. Implement Token Vault with automatic expiration.
- Do multi-user agents enforce permission intersection? If your agents operate under the broadest individual scope instead of the narrowest overlap, data exposure is a when, not an if. Deploy FGA with intersection-based policies.
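The third question has a concrete shape: before an agent surfaces retrieved data into a shared channel, each item must pass an authorization check for every member of the audience, and only the intersection survives. A sketch of that narrowest-overlap rule, where `check_access` stands in for a real per-tuple authorization call (such as an FGA batch check) and all user and document names are illustrative:

```python
# Per-user document permissions. In production these decisions come from
# a relationship-based authorization service (e.g. an FGA batchCheck),
# not an in-memory dict; everything here is illustrative.
USER_DOCS = {
    "alice": {"roadmap.pdf", "salaries.xlsx", "okrs.doc"},
    "bob":   {"roadmap.pdf", "okrs.doc"},
    "carol": {"roadmap.pdf"},
}

def check_access(user: str, doc: str) -> bool:
    """Stand-in for a single authorization check."""
    return doc in USER_DOCS.get(user, set())

def shareable(docs: set[str], audience: list[str]) -> set[str]:
    """Return only documents every audience member may see:
    the narrowest overlap, not the broadest individual scope."""
    return {d for d in docs if all(check_access(u, d) for u in audience)}

retrieved = {"roadmap.pdf", "salaries.xlsx"}
# Alice alone could see both documents; once Bob and Carol share the
# channel, only the commonly visible document may be surfaced.
safe = shareable(retrieved, ["alice", "bob", "carol"])
```

The failure mode in the CVSS 9.3+ disclosures was exactly the missing `all(...)`: the agent retrieved under one user's permissions and broadcast under everyone's eyes.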
The Cost of Waiting
The regulatory walls are closing in from three directions, and the penalties are not hypothetical.
The EU AI Act’s high-risk system requirements take full effect in August 2026. Article 14 demands demonstrable human oversight of autonomous systems. Article 99 sets fines of up to 35 million euros or 7% of global annual turnover. Without verifiable delegation chains and audit trails, compliance is structurally impossible.
In the US, the SEC’s Cyber and Emerging Technologies Unit (CETU), launched February 2025, explicitly targets fraudulent cybersecurity disclosure and AI-enabled fraud. The cybersecurity disclosure rule requires reporting material cyber incidents within four business days. An agent-driven breach you cannot explain because no delegation lineage exists is not just a security failure. It is a disclosure problem.
And GDPR has generated 7.1 billion euros in cumulative fines since 2018, with enforcement now extending to AI data processing. When agents retrieve and surface personal data across shared workspaces (Blog 6’s exact failure mode), every uncontrolled exposure is a potential violation.
Anthropic’s 2026 Agentic Coding Trends Report identifies “embedding security architecture from the earliest stages” as a top priority. That is the industry signaling what regulators will soon require.
Picture six months of inaction. Orphaned credentials accumulate. Delegation chains grow undocumented. Scope creep compounds across sub-agents. Then the audit comes, or the breach, and you are doing forensic archaeology through decisions no one recorded. Remediation dwarfs prevention. Regulatory penalties dwarf both.
Okta for AI Agents: Discover, Onboard, Protect, Govern.
Agent security is identity and authorization security. There is no other kind.
Read the full series: AI Agent Security: Rebuilding IAM for Autonomous Trust
Learn more: okta.com/solutions/secure-ai | auth0.com/ai