This is the second blog in a seven-part series on identity security as AI security.

TL;DR: A silent breach rippled through the SaaS world in August 2025: no ransom demand, no splashy defacement. Just stolen credentials, quietly forgotten and dangerously alive. The target was Salesloft Drift, a marketing automation platform that connects the Drift AI chat agent with a Salesforce or Google Workspace instance, among others. Attackers didn’t need brute force; they used OAuth tokens (digital keys issued months earlier) to infiltrate over 700 organizations. The fallout was massive: business contacts, Salesforce data, and internal API keys were siphoned off. It was one of the largest SaaS-to-SaaS breaches to date and spotlighted a deeper issue in identity management. AI agents don’t log off, but their credentials often persist for months, forgotten and unrevoked. These dormant tokens become ticking time bombs.

The solution starts with reconceptualizing identity and access. Credentials should be short-lived, automatically renewed, and revoked the moment they’re no longer needed. Durable authorization must come with built-in expiration, not just convenience.

The incident that proves the problem

In August 2025, one of the most far-reaching SaaS breaches in recent memory unfolded. Salesloft Drift was breached without any exploit code, zero-day vulnerabilities, or malware. The attackers didn’t need them. All they needed was time and tokens. The attackers first gained access to Salesloft's GitHub account between March and June 2025. Then, they planted malicious workflows and accessed Drift’s AWS environment. Once inside, they stole OAuth tokens for Drift customers’ technology integrations. More than 700 organizations were exposed.

But the real vulnerability wasn’t the theft; it was the longevity. These tokens, many of which had been issued months before, were still active. They hadn’t expired or been revoked. So when threat actors began using them in August to access data from connected services like Salesforce, Cloudflare, Palo Alto Networks, and Zscaler, the activity appeared legitimate. The tokens were valid. The services trusted them. The automation looked normal.

There was no need to break in, because the doors had never been locked.

This episode is a textbook case of “authorization drift”: a growing security risk where machine credentials outlive the workflows and business intents they were created for. And it’s not surprising, considering that 51% of organizations still don’t have a formal process for revoking these long-lived secrets. With non-human identities now outnumbering humans 144 to 1, that oversight is a gaping hole in the system.

Solving the problem requires rethinking how we authorize machine-to-machine activity. Static credentials should be replaced with short-lived tokens that renew only when context allows. Access controls must ask not just “is this token valid?” but “should it still be used?” And in fast-moving distributed systems, authorization must adjust continuously to stay aligned with intent.
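The two questions above can be made concrete. Below is a minimal Python sketch of the distinction; the names (`AgentToken`, `should_still_be_used`, `workflow_id`) are illustrative inventions for this post, not any vendor's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentToken:
    agent_id: str
    workflow_id: str          # the business task this token was issued for
    issued_at: datetime
    ttl: timedelta            # short lifetime; renewal is explicit, not implicit

    def is_valid(self, now: datetime) -> bool:
        """The weak question: has the token simply expired yet?"""
        return now < self.issued_at + self.ttl

def should_still_be_used(token: AgentToken, now: datetime,
                         active_workflows: set[str]) -> bool:
    """The stronger question: is the token unexpired AND does the
    business intent it was issued for still exist?"""
    return token.is_valid(now) and token.workflow_id in active_workflows
```

The point of the second check is that a technically valid token is denied the moment its workflow disappears from `active_workflows`, which is exactly the gate the stolen Drift tokens never had to pass.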

The Salesloft Drift breach didn’t exploit a weakness in code. It exploited a weakness in assumptions: that access, once granted, would be responsibly managed. Unless credential governance evolves, attackers won’t need new tactics. They’ll just wait.

The authorization lifecycle problem

AI agents aren’t users in the traditional sense. They don’t log in, complete a task, and log out. They run continuously for days, even weeks, executing long workflows like data reconciliation, onboarding, or model training. And yet, most identity systems still operate on human assumptions: sessions with a start and end, credentials issued once and forgotten.

That model breaks when applied to autonomous systems. Beyond managing user logins, IAM is moving toward real-time trust management across humans, services, and agents that act on our behalf, across systems, without direct oversight. The OpenID Foundation calls this “asynchronous execution with durable delegated authority”: agents operating independently under governed, revocable identities.

In practical terms, it means rethinking the fundamentals:

  • Delegated identities that are purpose-built for agents, separate from user credentials.

  • Continuously renewable access, where permission adjusts dynamically to context.

  • Instant de-provisioning across all systems when risk surfaces—no delay, no manual cleanup.

  • Real-time checks that validate intent at the moment of action, not just at the time of token issuance.

This is where AI Agent security is headed: identity as a dynamic trust layer for AI.

And if prevention isn't enough motivation, regulation is coming

Beginning August 2, 2026, enforcement of Article 14 of the EU AI Act will require organizations to prove that every AI-driven action was authorized at the time it occurred, not just when credentials were issued. This shift toward execution-time accountability carries real weight: violations could cost up to €35 million ($38 million) or 7% of global revenue.

The U.S. is moving in the same direction. Frameworks like Federal Identity, Credential, and Access Management (FICAM) and the Department of Justice’s Data Security Program Rule are starting to demand lifecycle control over non-human identities and automated access.

The old assumption that “the token was valid” no longer holds up.

Auditors are asking tougher questions now:
“This agent accessed customer data on Day 45. The employee left on Day 30. The task ended on Day 10. Show me the authorization trail.”

A weak system might reply:
“The OAuth token was valid for 90 days.” That’s not good enough anymore.

A mature, lifecycle-aware system responds differently:
“The agent operated under a unique, delegated identity. The token was auto-revoked when the task completed on Day 10. All access was logged, the delegation chain verified, and de-provisioning occurred within seconds.”

The cost of authorization drift

Authorization drift is the silent threat most teams don’t see coming. It’s the gap between when something should lose access and when it actually does. According to OWASP’s NHI7 report, credentials stay active an average of 47 days after they’re no longer needed. That’s nearly two months where attackers (or unforeseen mistakes) have free rein. 

Non-human identities now outnumber human ones by 144 to 1 in some enterprises. Every lingering token tied to an AI agent opens the door to unintended access, long after the task is done, the employee is gone, or the integration has shifted.

This is why lifecycle-aware authorization is becoming essential. It helps ensure access isn’t just granted and forgotten, it adapts. Credentials auto-expire when workflows complete. Access renews only if the context still makes sense. It’s dynamic trust, not static permission.
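One way to guarantee "auto-expire when workflows complete" is to bind the credential's lifetime to the workflow's lifetime in code. A minimal Python sketch, using a context manager as the binding mechanism (the `workflow_credential` helper and in-memory `REVOKED` set are illustrative assumptions, not a vendor SDK):

```python
from contextlib import contextmanager
import secrets

REVOKED: set[str] = set()

def revoke(token: str) -> None:
    """Stand-in for a real revocation call to the identity provider."""
    REVOKED.add(token)

@contextmanager
def workflow_credential(workflow_id: str):
    """Issue a credential bound to one workflow and guarantee revocation
    the moment the workflow exits -- normally or with an error."""
    token = f"{workflow_id}:{secrets.token_hex(8)}"
    try:
        yield token
    finally:
        revoke(token)      # no lingering token, even on failure
```

The `finally` clause is the whole point: the credential cannot outlive the workflow, even when the workflow crashes, which closes the 47-day gap by construction.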

Without it, the stakes are very high. IBM’s 2025 Cost of a Data Breach Report estimates the global average breach now costs $4.4 million. Unless access is tied to intent, and intent is monitored in real time, “durable authorization” becomes just another term for hidden risk.

From static to lifecycle authorization

To shut down authorization drift, AI agents need credentials that adapt to the real-time conditions they operate in. Most security tools weren’t built for agents that work autonomously for weeks. 

Fixing that starts with lifecycle-aware authorization: credentials expire, renew, or get revoked based on context, not just a preset timer. Four key design principles make this work:

  • Durable Delegated Identity: Every AI agent has its own identity, separate from users, governed, and auditable.

  • Continuously Renewable Authorization: Access adjusts automatically as the task, user, or environment changes.

  • Instant Cross-System De-Provisioning: Revoking access in one place shuts it down everywhere, fast.

  • Real-Time Authorization Validation: Actions get re-checked against current policies at the moment they happen, not just when credentials were issued.
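The fourth principle is worth spelling out, because it is the one static tokens get wrong. The decision must run against the policy as it stands at execution time, not the snapshot that existed at issuance. A hedged Python sketch (the function name and in-memory policy map are illustrative, not a product API):

```python
def authorize_action(agent_id: str, action: str,
                     current_policies: dict[str, set[str]]) -> bool:
    """Decide against the policy as it stands *now*, not the snapshot
    that existed when the agent's token was minted. A still-valid
    token is necessary but not sufficient."""
    return action in current_policies.get(agent_id, set())
```

If the policy is tightened after a token is issued, the very next action is denied, even though the token itself never expired.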

Okta’s AI Agent Lifecycle Management (LCM) framework puts these ideas into practice. It handles identity creation, ongoing authorization checks, and automated de-provisioning for AI systems, supporting safer, compliant operations as AI takes on more responsibility. As regulatory scrutiny tightens and agents become more autonomous, this approach is fast becoming a requirement, not a luxury.

How Okta and Auth0 make it real

Lifecycle-aware authorization is not a replacement for IAM; it extends it to meet the needs of modern, autonomous systems. As AI agents take on more responsibilities across systems, credentials need to follow the same rules as human access: valid only when necessary, and revoked the moment they’re not.

1. Durable Delegated Identity — Okta’s AI Agent Lifecycle Management

As part of the identity security fabric, Okta's AI agent lifecycle management registers AI agents as distinct identities and governs them with clear delegation chains: not as user stand-ins, but as managed non-human actors with policies tailored to their roles. Through Okta Identity Governance and Privileged Access, agents are granted only what they need, when they need it, and stripped of access the moment that context ends.

2. Contextual Authorization — Auth0 Token Vault + FGA

Auth0 Token Vault issues credentials that are short-lived and tied to specific tasks, minimizing token drift and persistence. Auth0 Fine-Grained Authorization (FGA) adds dynamic, context-aware decisioning at each API call. For asynchronous actions, Auth0’s support for Client-Initiated Backchannel Authentication (CIBA) checks live delegation before execution, not just at credential issuance.
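Fine-grained authorization models access as relationship tuples and answers check queries against them. The toy Python sketch below illustrates that idea only; it is a deliberate simplification and not the Auth0 FGA SDK or its API (real engines also resolve full relation-rewrite hierarchies defined in an authorization model).

```python
# Relationship tuples: (object, relation, subject)
tuples = {
    ("doc:q3-report", "viewer", "agent:drift-bot"),
    ("doc:q3-report", "owner", "user:alice"),
}

def check(obj: str, relation: str, subject: str) -> bool:
    """Direct-tuple check: is this exact relationship recorded?"""
    return (obj, relation, subject) in tuples

def check_with_hierarchy(obj: str, relation: str, subject: str) -> bool:
    """One hop of relation rewriting: owners are implicitly viewers.
    A real FGA model expresses such rules declaratively."""
    if check(obj, relation, subject):
        return True
    return relation == "viewer" and check(obj, "owner", subject)
```

Because the decision is a lookup at call time, revoking access is just deleting a tuple; the next `check` fails immediately, with no token to chase down.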

3. Continuous Revocation and Audit Visibility — Okta Identity Security Fabric

When access is revoked, it has to take effect everywhere, instantly; otherwise the potential for exploitation of that access remains. Okta’s Identity Security Fabric enforces shared-signal revocation and open standards such as DPoP (RFC 9449), helping ensure revoked credentials propagate instantly across SaaS ecosystems. Sub-second propagation helps ensure no stale token lingers, and every decision is logged for full audit visibility.
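DPoP matters here because it binds each token use to a key, a method, a URI, and a moment in time, so a stolen bearer token alone is useless. The sketch below builds the header and claims of a DPoP proof JWT as RFC 9449 defines them; the ES256 signature step (signing `header.claims` with the key whose public half appears in `jwk`) is deliberately elided, and the example key material is a placeholder.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dpop_proof_parts(method: str, uri: str, public_jwk: dict) -> tuple[str, str]:
    """Header and claims segments of a DPoP proof JWT (RFC 9449).
    The third segment -- an ES256 signature over these two -- is elided."""
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    claims = {
        "jti": str(uuid.uuid4()),   # unique per proof: blocks replay
        "htm": method,              # bound to one HTTP method...
        "htu": uri,                 # ...and one target URI
        "iat": int(time.time()),    # ...and to a narrow time window
    }
    return (b64url(json.dumps(header).encode()),
            b64url(json.dumps(claims).encode()))
```

A fresh proof must accompany every request, which is why DPoP-bound tokens resist exactly the replay pattern the Drift attackers relied on.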

The bottom line

Non-human identities, AI agents among them, now vastly outnumber their human counterparts, by as much as 144 to 1 in some organizations. But with that scale comes a dangerous gap.

The Salesloft–Drift breach, along with NIST’s 2025 Agent Hijacking study, lay bare a troubling pattern: valid credentials hanging around long after they should’ve been revoked. The business need had ended. The users were gone. But the access stayed.

The problem isn’t that AI agents are too powerful; it’s that our systems for managing their access haven’t caught up. They were built for people who log in and out, not autonomous code that runs for weeks, quietly performing sensitive tasks.

Okta’s Identity Security Fabric and Auth0’s adaptive authorization stack are carving a new path. Their evolving frameworks show what the future looks like: AI agents with their own identities, not borrowed credentials, governed by policies that continuously adapt to real-time context. Access isn’t just granted at the start of a job; it’s constantly re-evaluated. And when a task ends or a condition changes, access disappears instantly. No waiting, no manual cleanup.

In other words, this is a paradigm shift:

  • Shared credentials are replaced by delegated, agent-specific identities.

  • Static, long-lived tokens give way to credentials that renew or expire with context.

  • Arbitrary time limits are replaced with revocation tied to business logic.

  • Manual de-provisioning is swapped for instant, automated cutoffs across systems.

This is more than an upgrade to IAM; it’s a revolution. Access is now dynamic, responsive to workflows, roles, and tasks. Credentials adapt to what’s happening in real time and vanish when their job is done.

No more lingering tokens. No more silent exposure. Just identity as a living trust layer for autonomous systems. Anything less leaves organizations exposed to breaches born not from malice, but from inertia.

Next: Blog 3 dives into cross-domain federation: how AI agents can prove delegated authority across multiple organizations, even when there's no single source of truth to rely on.

Continue your Identity journey