How are regulated industries handling AI agent security?

Updated: April 27, 2026

To satisfy regulatory requirements, organizations in regulated industries need to adopt non-human identity (NHI) governance: treating AI agents as first-class identities subject to the same Zero Trust, fine-grained authorization (FGA), and audit controls as human users.

As AI agents evolve from assistants to autonomous actors, regulated industries are shifting from application security toward NHI governance. This framework aligns with Zero Trust, FGA, and traceable intent to help organizations meet the rigorous record-keeping and accountability mandates of HIPAA and the EU AI Act.

Regulated industries face a fundamental problem. AI agents may operate with varying degrees of autonomy, access sensitive data, and make decisions without explicit human approval for each step, depending on design and governance controls. Yet HIPAA, GDPR, and the EU AI Act establish specific requirements for auditability, accountability, and documented controls over data access and decision-making. Financial regulations, such as Dodd-Frank, mandate risk management and institutional accountability. Without a verifiable identity for each agent, these auditability and accountability requirements become difficult to satisfy. Zero Trust architecture, with continuous verification of agent identity, human authorization, and action intent, helps support the compliant deployment of AI agents in regulated environments.

As financial services and healthcare organizations deploy AI agents to manage transactions, access patient data, and make autonomous decisions, they are creating a new class of NHI that extends beyond what traditional identity and access management (IAM) systems were optimized to handle. Treating AI agents as first-class identities subject to the same governance rigor as human users is becoming the baseline for deployment.

The authority gap

An AI agent is a digital entity capable of autonomous action. Unlike API integrations that follow predefined workflows, agents can reason, make decisions, and act with varying levels of independence. Consider a human analyst who authorizes an AI agent to “review high-risk transactions and flag suspicious activity.” The agent may begin canceling transactions it deems risky, perhaps reasoning that this prevents fraud. Or it may escalate data to external systems without verifying authorization. These actions illustrate unauthorized capability expansion: the agent bypasses its intended human-in-the-loop (HITL) constraint and evolves from a recommender into an autonomous executor.

Uncontrolled emergent behavior resulting from autonomous reasoning is unacceptable in any environment, and even more so in regulated ones. Organizations should take steps now to extend identity governance to agents by defining the agent’s identity, the authorizing party, explicit operational limits, and the context required to verify that every action aligns with the original human intent.

Organizations must govern AI agents as first-class identities, moving beyond the limited scope of standard software applications.

What’s the difference between traditional security and AI agent governance (2026)?*

| Control Layer | Traditional IAM | AI Agent Identity (NHI) | Regulatory Requirement |
| --- | --- | --- | --- |
| Authentication | Static (API keys/passwords) | Signed identity assertions (OIDC / workload identity federation) | NIST SP 800-207 (Zero Trust architecture) |
| Authorization | Role-based (RBAC) | Fine-grained (FGA/ABAC) | HIPAA (minimum necessary) / Dodd-Frank (risk controls) |
| Human oversight | Administrative approval | Human-in-the-loop (HITL) | NIST AI RMF (govern/manage functions) |
| Audit detail | Access logs (who/when) | Action-level provenance with human authorization context | GDPR / SEC market surveillance rules |
| Trust model | Perimeter-based / implicit trust | Continuous Zero Trust | NIST SP 800-207 |
| Visibility | Managed assets | Shadow AI discovery | EU AI Act (transparency and record-keeping requirements) |

*Regulatory requirements are outcome-based and do not prescribe specific technical implementations. This table maps regulatory intent to practical identity controls commonly used to achieve compliance with HIPAA, GDPR, Dodd-Frank, NIST, and EU AI Act requirements for AI agent governance.

Zero Trust for agents

Zero Trust means never assuming trust based on location, time, or prior verification, and verifying continuously.

Applying Zero Trust to agents requires confirming three things at every action:

  1. Agent identity: Cryptographic proof of origin through signed tokens (e.g., OAuth 2.0 access tokens, OIDC ID tokens, or workload identities). Short-lived tokens reduce replay and impersonation risks inherent in long-lived API keys, particularly when combined with proper token validation and audience binding.
  2. Human authorization context: Was the human who initiated the agent truly authorized? The agent acts on behalf of the human, and the authorization context travels with each request. Systems must continuously evaluate whether the authorization remains valid, whether the role permits the request, and whether the business context remains active.
  3. Action intent: Does the action match the agent’s authorized scope? A financial agent authorized to review transactions should not initiate transfers. A healthcare agent authorized to read intake forms should not modify treatment records or access restricted data.

Continuous verification of these three elements at the transaction level (rather than once at instantiation) underpins Zero Trust for agentic systems.
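As a minimal sketch of what per-action verification could look like, the Python below evaluates all three checks on each request. The data model, field names, and scope strings are illustrative assumptions, not any particular product’s API; a real deployment would validate a signed token rather than trust plain fields:

```python
import time
from dataclasses import dataclass

# Hypothetical, simplified claim set carried with each agent request.
@dataclass
class AgentRequest:
    agent_id: str               # verified agent identity (from a signed token)
    delegator: str              # human who authorized the agent
    delegator_session_ok: bool  # is the human's authorization still active?
    scopes: frozenset           # actions the agent is authorized to perform
    action: str                 # action the agent is attempting now
    token_expiry: float         # epoch seconds; short-lived by design

def authorize(req: AgentRequest) -> bool:
    """Evaluate all three Zero Trust checks on every action."""
    # 1. Agent identity: reject expired credentials outright.
    if time.time() >= req.token_expiry:
        return False
    # 2. Human authorization context: the delegating human's
    #    authorization must still be valid at request time.
    if not req.delegator_session_ok:
        return False
    # 3. Action intent: the requested action must fall inside
    #    the explicitly granted scope -- no implicit expansion.
    return req.action in req.scopes

req = AgentRequest(
    agent_id="agent-7",
    delegator="analyst@example.com",
    delegator_session_ok=True,
    scopes=frozenset({"transactions:read", "transactions:flag"}),
    action="transactions:cancel",   # outside the granted scope
    token_expiry=time.time() + 300,
)
print(authorize(req))  # False: review/flag was granted, cancel was not
```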

NIST AI RMF

The NIST Artificial Intelligence Risk Management Framework (AI RMF) provides the structural basis for agent governance. Industry alignment on agent governance is accelerating as the focus shifts from “models as tools” to “agents as actors” that affect external states. Implementing Zero Trust through identity governance is a practical control mechanism to support mapping, measuring, and managing the risks associated with this autonomy.

Map: Inventory all AI systems and document the scope. This requires treating agents as identities, registering them in a centralized system, documenting capabilities, and maintaining a discoverable inventory. Shadow AI, where agents operate outside IT visibility, becomes detectable when identity governance is in place.

Measure: Assess whether AI systems operate within intended boundaries. Authenticated identities allow security teams to establish baselines, detect behavioral anomalies, and verify that actions align with scope.

Manage: Implement controls to reduce risk. For agents, this means enforcing least privilege (agents have only current-task access), just-in-time access (permissions granted only when needed and revoked immediately after), and HITL controls (human approval for high-risk decisions).

Govern: Establish accountability via attribution by connecting every agent action to a specific agent identity and the human who authorized it. Audit trails must show what the agent did, why it was authorized, and whether the action was within scope.
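To illustrate how the Map and Govern functions interlock, here is a hedged sketch of a central agent registry that refuses to attribute actions to unregistered identities. All names and structures are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    agent_id: str
    owner: str          # accountable human or team
    capabilities: list  # documented scope of what the agent may do
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AgentRegistry:
    """Central inventory: unregistered agents are treated as shadow AI."""
    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def attribute(self, agent_id: str, action: str) -> dict:
        # Govern: every action must attribute to a registered identity
        # and its accountable owner, or it is flagged as shadow AI.
        record = self._agents.get(agent_id)
        if record is None:
            raise LookupError(f"shadow AI detected: {agent_id!r} is unregistered")
        return {"agent": agent_id, "owner": record.owner, "action": action}

registry = AgentRegistry()
registry.register(AgentRecord("agent-7", "fraud-ops@example.com",
                              ["transactions:read", "transactions:flag"]))
print(registry.attribute("agent-7", "transactions:flag"))
try:
    registry.attribute("agent-99", "db:read")
except LookupError as err:
    print(err)  # shadow AI detected: 'agent-99' is unregistered
```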

The three core risks

According to Gartner, by 2030, 50% of AI agent deployment failures will stem from insufficient runtime enforcement of capabilities and multisystem interoperability by AI governance platforms. Identity governance is a crucial component of this enforcement and can help organizations prevent unauthorized agent actions and maintain audit trails for regulatory compliance.

Shadow AI: Agents operating outside IT visibility and control. A data analyst deploys a custom agent to automate reporting, connecting it to production databases without approval. A support team uses an unapproved LLM to draft responses, exposing confidential data. In regulated environments, Shadow AI can pose significant compliance risks and lead to potential violations. If an agent accesses protected health information (PHI) without appropriate technical and organizational safeguards, the organization may violate HIPAA requirements. Shadow AI can also create unmanaged NHIs with credentials outside lifecycle management, accumulating excessive permissions and becoming lateral movement vectors.

Data leakage: Agents may access data beyond their authorized scope through over-permissioned credentials (broad system access) or inadequate scope enforcement (missing task-specific data restrictions). An agent authorized to “access patient records” might receive read access to entire databases. If the immediate task is to verify intake forms for one clinic, broad access violates least privilege. When integrated with downstream applications, FGA enables enforcement of task-specific boundaries, restricting an agent’s access to the exact records and actions required for a transaction rather than granting broad, persistent permissions.
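As a rough illustration (not a specific FGA product’s API), the check below evaluates the agent’s current task context rather than a static role, so the same agent gets different answers depending on the task it is performing:

```python
def fga_check(agent: dict, resource: dict, action: str) -> bool:
    """Task-scoped check: grant only if the resource matches the
    agent's current task context, not merely its broad role."""
    task = agent["current_task"]
    return (
        action in task["allowed_actions"]              # e.g., "read"
        and resource["type"] == task["resource_type"]  # e.g., "intake_form"
        and resource["clinic"] == task["clinic"]       # bound to one clinic
    )

agent = {"current_task": {"allowed_actions": {"read"},
                          "resource_type": "intake_form",
                          "clinic": "clinic-12"}}

# Same agent, different outcomes depending on task context:
print(fga_check(agent, {"type": "intake_form", "clinic": "clinic-12"}, "read"))    # True
print(fga_check(agent, {"type": "patient_record", "clinic": "clinic-12"}, "read")) # False
```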

Privilege escalation: Agents may accumulate permissions over time or chain legitimate permissions in unauthorized ways. Escalation occurs when agents inherit user permissions rather than receive explicit, scoped credentials, or when credentials are never rotated. Preventing escalation requires that every agent have a distinct identity, receive only the permissions required for the current task, and have those permissions revoked immediately upon task completion.
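A minimal sketch of that just-in-time pattern, assuming a hypothetical in-process credential broker; real systems would delegate issuance and revocation to an identity provider:

```python
import secrets
import time

class CredentialBroker:
    """Just-in-time credentials: scoped to one task, revoked on completion."""
    def __init__(self):
        self._live: dict[str, dict] = {}

    def issue(self, agent_id: str, scopes: set, ttl_seconds: int = 300) -> str:
        token = secrets.token_urlsafe(16)
        self._live[token] = {"agent": agent_id, "scopes": scopes,
                             "expires": time.time() + ttl_seconds}
        return token

    def check(self, token: str, scope: str) -> bool:
        grant = self._live.get(token)
        return (grant is not None
                and time.time() < grant["expires"]
                and scope in grant["scopes"])

    def revoke(self, token: str) -> None:
        # Revoke immediately on task completion; nothing lingers to escalate.
        self._live.pop(token, None)

broker = CredentialBroker()
token = broker.issue("agent-7", {"reports:read"})
print(broker.check(token, "reports:read"))  # True while the task runs
broker.revoke(token)                        # task done
print(broker.check(token, "reports:read"))  # False: no permission accumulation
```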

Least privilege and fine-grained authorization

Least privilege means granting every identity only the permissions required for a specific function, for the shortest duration. For agents, this must be more granular and dynamic than traditional role-based access control (RBAC).

An agent’s access requirements vary by task. FGA and attribute-based access control (ABAC) can help enable this. Rather than assigning permanent roles, these approaches evaluate each request in context. Access is granted only if the agent’s identity is verified, human authorization is active, the requested action falls within scope, and risk signals do not indicate compromise.

Short-lived tokens issued by a centralized identity provider can embed authorization context and delegation chains. When tokens expire (minutes to hours), the agent re-authenticates and receives a fresh token reflecting the current authorization state.
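For example, RFC 8693 defines an “act” (actor) claim that preserves the delegation chain inside the token itself. A decoded payload for an agent acting on behalf of a human might look like the following (all values illustrative):

```python
# Decoded JWT payload (illustrative values) after an RFC 8693 token
# exchange: "sub" is the human on whose behalf the agent acts, and the
# "act" claim identifies the agent actually making the call.
token_payload = {
    "iss": "https://idp.example.com",   # issuing identity provider
    "sub": "analyst@example.com",       # delegating human
    "aud": "https://api.example.com",   # audience binding
    "scope": "transactions:read transactions:flag",
    "exp": 1767225600,                  # short-lived expiry
    "act": {                            # RFC 8693 actor claim
        "sub": "agent-7"                # the agent performing the action
    },
}

# A resource server can attribute the call to agent-7 *and* verify the
# human authorization context from a single signed assertion.
print(token_payload["act"]["sub"], "acting for", token_payload["sub"])
```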

Traceable intent

Regulated industries require auditability and attribution. Every action must be traceable to a responsible actor and supported by sufficient context to justify the action.

For AI agents, implementing “traceable intent” as an architectural pattern requires logging each action with agent identity, human authorization context (who initiated the agent, and what was their authorization?), task context (what business purpose justified this action?), specific scope (which data, systems, and actions?), the action taken, and the outcome.

Establishing traceable intent creates a complete audit trail with full attribution. Auditors can then reconstruct not only the specific action but also the underlying authorization and the human context behind the request.

Implementing traceable intent for AI agents requires several core technical controls:

  • Centralized agent registration: Maintaining a verifiable inventory of all active NHIs
  • Delegated authorization: Using standards-based patterns, such as OAuth 2.0 token exchange (RFC 8693), to map agent actions back to human intent
  • Contextual logging: Capturing the business justification, task scope, and specific data accessed for every transaction
  • Tamper-evident audit trails: Securing logs with cryptographic signing to support non-repudiation and detect any unauthorized modifications
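Putting these together, here is a minimal sketch of a tamper-evident, context-rich audit log using a hash chain; a production system would add cryptographic signing with managed keys and append-only storage:

```python
import hashlib
import json

class AuditLog:
    """Hash-chained log: modifying any entry breaks every later hash."""
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, agent_id: str, human: str, purpose: str,
               scope: str, action: str, outcome: str) -> None:
        entry = {"agent": agent_id, "authorized_by": human,
                 "purpose": purpose, "scope": scope,
                 "action": action, "outcome": outcome,
                 "prev": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("agent-7", "analyst@example.com", "fraud review",
           "transactions:read", "read txn-42", "flagged")
print(log.verify())                     # True
log.entries[0]["outcome"] = "approved"  # tamper with history
print(log.verify())                     # False: the chain detects it
```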

Identity controls can also enforce ethical boundaries. An agent registered for fraud detection cannot access data for secondary purposes without explicit re-authorization. An agent authorized to read intake forms cannot modify treatment records. Scope enforcement operates at the API level.

Finance

Agents access transaction data, execute trades, and modify accounts. The core security challenge is preventing unauthorized or fraudulent transactions.

Securing financial agents requires:

  • Authentication at every transaction
  • Explicit authorization per transaction type: read-only for monitoring, write access only for specific queue updates, and escalation access only to authorized reviewers
  • Behavioral anomaly detection: impossible travel, unusual transaction types, and access to new data sources
  • Segregation of duties via identity controls, so that no single agent has end-to-end transaction authority
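As one hedged illustration of segregation of duties, the snippet below rejects any single agent identity that would hold conflicting permissions end to end. The permission names and conflict set are assumptions:

```python
# Hypothetical segregation-of-duties check: no single agent identity may
# hold end-to-end authority over a transaction (e.g., both initiate and
# approve).
GRANTS = {
    "agent-monitor":  {"transactions:read"},
    "agent-queue":    {"transactions:read", "queue:update"},
    "agent-reviewer": {"transactions:read", "transactions:approve"},
}

CONFLICTING = [frozenset({"transactions:initiate", "transactions:approve"})]

def violates_sod(scopes: set) -> bool:
    """True if the grant set contains any conflicting permission pair."""
    return any(pair <= scopes for pair in CONFLICTING)

for agent, scopes in GRANTS.items():
    assert not violates_sod(scopes), f"{agent} holds conflicting duties"

# Granting end-to-end authority to one identity would fail the check:
print(violates_sod({"transactions:initiate", "transactions:approve"}))  # True
```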

Healthcare

Agents process PHI under HIPAA constraints. Access must be limited to what is necessary and must be auditable.

Dynamic scoping via FGA enforces data element-level boundaries. An agent reviewing intake might need patient demographics (not insurance), chief complaint and symptom history (not psychiatric history), medication history (not genetic testing), and vital signs (not mental health assessments), depending on organizational policy and regulatory interpretation.
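A simple sketch of that field-level scoping, with hypothetical field names; the allowed set would come from organizational policy and regulatory interpretation:

```python
# Illustrative field-level scoping: the agent receives only the data
# elements its current task requires.
INTAKE_REVIEW_FIELDS = {"demographics", "chief_complaint",
                        "symptom_history", "medications", "vital_signs"}

def scope_record(record: dict, allowed_fields: set) -> dict:
    """Return only the fields the current task is authorized to see."""
    return {k: v for k, v in record.items() if k in allowed_fields}

patient_record = {
    "demographics": {"age": 54},
    "chief_complaint": "chest pain",
    "insurance": "...",            # excluded from intake review
    "psychiatric_history": "...",  # excluded from intake review
    "vital_signs": {"bp": "130/85"},
}
print(scope_record(patient_record, INTAKE_REVIEW_FIELDS))
# Only demographics, chief_complaint, and vital_signs are returned.
```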

Healthcare organizations must manage:

  • Audit trail completeness: Every PHI access is logged with agent identity, patient record, specific fields, timestamp, and justification
  • Data residency and regional compliance: Agents and data respect applicable regulatory boundaries, such as HIPAA and GDPR
  • Patient rights and data minimization: Agents receive only the data required for the current task, not broad historical access
  • De-identification and consent: Agents processing identified PHI are segregated from research datasets

Ensuring compliance in the era of AI autonomy

Regulated industries need a robust identity foundation to deploy AI agents safely. Agents should be treated as NHIs subject to the same level of governance as human users.

This means registering and discovering agents, authenticating agents with short-lived credentials, dynamically scoping access via FGA, maintaining complete audit trails, implementing human oversight for high-risk decisions, and detecting behavioral anomalies.

Modern identity platforms can provide these capabilities and align with the NIST Zero Trust Architecture and NIST AI RMF, supporting compliance with HIPAA, GDPR, Dodd-Frank, and the EU AI Act.

Securing AI agents is essential, comparable in importance to encrypting data at rest or segmenting networks. Organizations that embed Zero Trust and identity governance into AI development from the start can scale autonomous deployments while maintaining their long-term safety and compliance posture.

Frequently asked questions

What is an AI agent in the context of security?

An AI agent can be modeled as a non-human identity (NHI) when it is granted credentials and acts autonomously within systems. Governing agents as NHIs requires authenticated identity, explicit authorization boundaries, and continuous audit logging.

What is shadow AI?

Shadow AI occurs when agents operate without centralized NHI registration, thereby bypassing identity controls and creating unmonitored data egress points.

How does Zero Trust apply to AI agents?

Zero Trust verifies three things at every action: the agent’s identity (cryptographic proof), the human authorization context (was the user legitimate?), and the action intent (does the action align with the authorized scope?).

Which compliance standards most immediately affect AI agents?

HIPAA, GDPR, EU AI Act, and financial regulations (Dodd-Frank, SEC market surveillance rules). All require audit trails, attribution, and explicit authorization controls enforced through identity governance.

What is the difference between securing AI agents and securing human users?

The authentication mechanism shifts from interactive (passwords, MFA) to non-interactive (tokens, certificates, workload identity federation). The underlying principles remain the same: verifiable identity, explicit authorization, and complete audit trails.

Secure AI agents with compliance in mind

Modern identity platforms can extend governance to non-human identities through workload identity federation, fine-grained authorization, and audit logging with tamper detection. The Okta Platform provides these capabilities to help organizations safely and compliantly scale AI deployments in regulated industries.


These materials are for general informational purposes only and do not constitute legal, privacy, security, compliance, or business advice.

The content may not reflect the most current security, legal and/or privacy developments. You are solely responsible for obtaining advice from your own legal and/or professional advisor and should not rely on these materials.

Okta makes no representations or warranties regarding this content and is not liable for any loss or damages resulting from your implementation of these recommendations. Information on Okta’s contractual assurances to its customers may be found at okta.com/agreements.
