Executive Summary

AI agents are approving loans, giving legal advice, triaging patients, and controlling physical systems. When they cause harm, courts ask: can you prove who authorized the agent and what it was permitted to do, and can you produce the trail? Most enterprises cannot.

The attribution gap is the distance between what an agent did and your ability to prove who authorized it and what it was permitted to do. It maps directly to regulatory text across eight frameworks on both sides of the Atlantic, with the Colorado AI Act taking effect June 30, 2026, and the EU AI Act high-risk requirements following by December 2, 2027.

Six of eight frameworks are already enforceable today: Sarbanes-Oxley, the California Consumer Privacy Act, SEC cybersecurity disclosure rules (Form 8-K, Item 1.05), GDPR, NIS2, and DORA. Courts are closing the escape routes: a federal court allowed a product liability claim against an AI chatbot maker to proceed, a tribunal ruled companies cannot call an AI chatbot a separate entity, Italy fined an AI chatbot maker EUR 5 million, and Nippon Life sued OpenAI over the unauthorized practice of law. 78 AI chatbot bills are pending across 27 US states.

Every agent action must trace back to a real human who can be held accountable. That requires five controls: know which agent acted, limit what it can access, trace the authorization to a named human, verify permissions before data moves, and log everything immutably. The friction of implementing them is measured in milliseconds. The friction of not having them is measured in months, millions, and careers.

The Evidence: Four Cases, One Pattern

1. Nippon Life v. OpenAI (March 2026)

The suit alleges that ChatGPT engaged in the unauthorized practice of law by advising a disability claimant to fire her attorney, assisting in drafting legal motions, and citing fictitious case law. Acting on that guidance, the claimant used the tool to draft and file multiple legal documents in a suit seeking to undo a previous settlement. Nippon Life is seeking millions in damages for the cost of defending against the AI-generated filings. Who authorized that agent to give legal advice? No system recorded the answer.

2. Garcia v. Character Technologies (May 2025)

A federal court in Florida allowed a product liability claim to proceed against an AI chatbot maker, rejecting the First Amendment defense. Google, which provided the underlying LLM, could be liable as a component part manufacturer. If an AI chatbot is a product, the deploying company owns every output and needs a trail: which agent, which authorization, which human, which scope.

3. Moffatt v. Air Canada (February 2024)

Air Canada's AI-powered chatbot told a grieving passenger he could apply for a bereavement discount retroactively. He could not. The company argued the AI chatbot was a separate legal entity. The British Columbia tribunal rejected this:

While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website.

4. Italy v. Replika / Luka Inc. (2025)

Italy's data protection authority fined Replika's parent company EUR 5 million under GDPR. Three violations: failing to identify the lawful basis for processing user data, providing an inadequate privacy policy, and deploying no age verification. Three gaps that proper controls would have closed.

These four cases are not outliers. Kentucky became the first state to sue an AI chatbot company. The Netherlands fined Clearview AI EUR 30.5 million. As of February 2026, 78 AI chatbot bills have been filed across 27 US states. NYC's MyCity AI chatbot told business owners to break the law.

The pattern is the same in every case. An agent acts. A human is harmed. The enterprise cannot produce a trail showing who authorized the agent, what it was permitted to do, or how to attribute its actions to a responsible human. Courts are establishing that you are liable. Regulations are specifying the controls you need. That space is the attribution gap.

Figure: The attribution gap. The left side represents what agents do; the right side represents what enterprises need to prove. The chasm between them is where regulatory, legal, and operational risk lives.

What the Regulations Actually Require

Four questions determine regulatory exposure: Which agents are running? What access do they have? Who authorized that access? Can you revoke it right now? Every regulation below requires at least one of these controls.

1. Which agents are running? (Identity)

EU AI Act, Article 9. Fine: up to EUR 35M or 7% of global turnover for prohibited practices; EUR 15M or 3% for high-risk violations. Full enforcement: no later than December 2, 2027.

A risk management system shall be established...in relation to high-risk AI systems...requiring regular systematic review and updating.

You cannot risk-manage an agent you cannot identify. Article 15 requires resilience against exploitation of system vulnerabilities. An agent on a shared service account is a lateral movement vector.

SOX / COSO GenAI Guidance (Feb 23, 2026). Sections 302, 404, and 906 of the Sarbanes-Oxley Act require the CEO and CFO to personally certify internal controls over financial reporting. An agent on a shared service account with ERP access that posts journal entries could be a material weakness. Access controls now apply to agents, not just employees.
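What applying them could look like, as a minimal sketch: give every agent a unique, owned, expiring identity that an access review or SOX recertification can attach to. The types and field names below are illustrative assumptions, not any specific product's API.

```typescript
// Hypothetical sketch: a minimal agent identity record, so every running
// agent is individually identifiable and owned by a named human.

import { randomUUID } from "node:crypto";

interface AgentIdentity {
  agentId: string;   // unique per agent instance, never shared
  ownerId: string;   // the named human accountable for this agent
  purpose: string;   // what the agent is deployed to do
  scopes: string[];  // the permissions it was granted, nothing more
  createdAt: Date;
  expiresAt: Date;   // identities expire and get recertified, like human access
}

// Registration fails closed: no named owner, no identity, no access.
function registerAgent(input: Omit<AgentIdentity, "agentId" | "createdAt">): AgentIdentity {
  if (!input.ownerId) {
    throw new Error("Agent must trace to a named human owner");
  }
  return {
    ...input,
    agentId: randomUUID(), // a unique identity, not a shared service account
    createdAt: new Date(),
  };
}
```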

2. What access do they have? (Scoping)

GDPR, Article 32. Fine: up to EUR 20M or 4% of global turnover. GDPR's requirement for appropriate technical and organizational measures applies directly to AI agents that retrieve or process personal data on behalf of users. An agent with broad permissions retrieving data the requesting user has no right to access is unauthorized processing. The fix is checking that both the agent and the user have access before data moves. Four critical vulnerabilities (CVSS 9.3-9.4) hit Anthropic, Microsoft, ServiceNow, and Salesforce in 2025 with exactly this pattern.
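A minimal sketch of that dual check, with a toy grants table standing in for whatever policy engine an enterprise actually runs; all names here are hypothetical.

```typescript
// Stand-in for a real policy engine; the grants are illustrative.
const grants = new Set<string>([
  "agent-7:claims-db:read",
  "user-42:claims-db:read",
]);

function hasPermission(principalId: string, resource: string, action: string): boolean {
  return grants.has(`${principalId}:${resource}:${action}`);
}

// Before any record moves, verify that BOTH the agent and the human it
// acts for are authorized. Deny unless both principals pass.
function authorizeRetrieval(agentId: string, userId: string, resource: string): void {
  const agentAllowed = hasPermission(agentId, resource, "read");
  const userAllowed = hasPermission(userId, resource, "read");
  if (!agentAllowed || !userAllowed) {
    throw new Error(`Denied: agent=${agentAllowed}, user=${userAllowed}, resource=${resource}`);
  }
}

// An over-broad agent acting for an unentitled user is refused:
// authorizeRetrieval("agent-7", "user-99", "claims-db") throws.
```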

CCPA §1798.150. Statutory damages: $100 to $750 per consumer per incident, with a private right of action, meaning consumers can sue directly. An agent operating at machine speed can over-retrieve personal data across tens of thousands of records in minutes; that speed and breadth are what make per-consumer statutory damages so dangerous at scale. 40,000 records at the statutory minimum: $4 million.

3. Who authorized that access? (Attribution)

EU AI Act, Article 12. Record-keeping.

High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system.

That word technically matters. A workflow log tells you what happened. An audit trail tells you who, with what authority, and when it expired.
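To make the distinction concrete, here is a sketch of the two record shapes; every field name is an illustrative assumption, not a prescribed schema.

```typescript
// Illustrative only: a workflow log entry versus an audit record.

// A workflow log answers "what happened".
interface WorkflowLogEntry {
  timestamp: Date;
  action: string;            // e.g. "posted journal entry JE-4471"
}

// An audit record also answers "who, with what authority, and when it expired".
interface AuditRecord extends WorkflowLogEntry {
  agentId: string;            // which agent acted
  authorizedBy: string;       // the named human the authorization traces to
  delegationChain: string[];  // every hop between that human and the acting agent
  scopes: string[];           // what the credential actually permitted
  credentialExpiry: Date;     // when that authority expired
  previousRecordHash: string; // hash-chained so the trail is append-only and tamper-evident
}
```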

SOX / COSO Principle 8 (fraud risk assessment):

These risks can be exacerbated or accelerated through the use of AI agents that introduce authorization risks, excessive agency, and insecure interfaces.

As we detailed in part four of this series, recursive delegation creates attack surfaces where permissions fail to narrow at each hop. Shadow AI accelerates the attribution gap: line-of-business teams plug agents into SAP, Oracle, and Workday on generic service accounts with no unique identity and no delegation chain. When something breaks, no one can trace the action to a responsible human.

Enterprises are already hitting this wall. A European gambling platform asked whether its IGA tool could recertify agent identities for SOX compliance. A top-three US bank asked who is liable when agents delegate to other agents. Both questions lead back to the same gap: no identity, no authorization trail, no attribution.
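A minimal sketch of the alternative, under assumed token and scope types: permissions narrow at every hop, and the chain back to the originating human survives each delegation.

```typescript
// Hypothetical sketch: permissions may only narrow, never widen, at each
// delegation hop, and attribution to the originating human is preserved.

interface DelegatedToken {
  subject: string;      // the agent this token is issued to
  onBehalfOf: string[]; // chain of principals, starting with the human
  scopes: Set<string>;  // what this hop is allowed to do
}

function delegate(parent: DelegatedToken, childAgent: string, requested: string[]): DelegatedToken {
  // Attenuation: the child gets the intersection of what it requested and
  // what the parent holds. A hop can never add permissions.
  const granted = new Set(requested.filter((s) => parent.scopes.has(s)));
  if (granted.size === 0) {
    throw new Error(`Nothing delegable to ${childAgent}`);
  }
  return {
    subject: childAgent,
    onBehalfOf: [...parent.onBehalfOf, parent.subject], // attribution survives the hop
    scopes: granted,
  };
}

// Example: a human-authorized orchestrator hands a sub-agent read-only access.
const root: DelegatedToken = {
  subject: "orchestrator-1",
  onBehalfOf: ["alice@example.com"],
  scopes: new Set(["erp:read", "erp:post-journal"]),
};
const child = delegate(root, "reporting-agent-4", ["erp:read", "erp:admin"]);
// child.scopes contains only "erp:read"; "erp:admin" was refused at the hop.
```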

SEC Cybersecurity Disclosure Rules (Form 8-K, Item 1.05, effective December 2023). Material cybersecurity incidents must be disclosed within four business days of determining materiality, describing the nature, scope, and impact. If the agent had no identity and left no delegation chain, the disclosure is incomplete. That is its own enforcement risk.

4. Can you revoke them right now? (Human Oversight)

EU AI Act, Article 14.

to intervene on the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure.

A long-lived token that cannot be revoked mid-task is not a stop button. It is a request. Real intervention is the ability to pause before issuance, revoke in flight, and refuse a revoked credential at the enforcement point. Three capabilities. Most enterprises have zero.
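As a sketch of the third capability, assuming an in-memory stand-in for a real revocation store: the enforcement point refuses a revoked credential on its very next call.

```typescript
// Sketch: a revoked credential is refused mid-task, on the next call,
// rather than living until its natural expiry.
// The in-memory Set stands in for a real shared revocation store.

const revokedTokens = new Set<string>();

// The "stop button": one write, effective at the next enforcement check.
function revoke(tokenId: string): void {
  revokedTokens.add(tokenId);
}

function enforce(tokenId: string, expiresAt: Date): void {
  if (revokedTokens.has(tokenId)) {
    throw new Error("Credential revoked: stopping in flight");
  }
  // Short-lived tokens keep the revocation window small to begin with.
  if (expiresAt.getTime() < Date.now()) {
    throw new Error("Credential expired");
  }
}

// Every downstream call passes through enforce() before touching data, so
// revoke("token-123") halts the agent's next action, not its next deployment.
```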

The Accountability Problem

Agents are not legal persons. They cannot be deposed or held in contempt. They cannot testify to intent. And unlike a human employee who can be interviewed six months later, an AI agent is ephemeral. It spins up, acts, and shuts down. If no identity was assigned and no trail was captured while it was running, the evidence is gone permanently.

Air Canada tested this in court. Its AI-powered chatbot gave wrong bereavement fare advice. The company argued it was a separate legal entity. The tribunal rejected that argument. But the ruling exposed a deeper problem. No unique identity. No authorization record. No trail connecting the chatbot's output to a human. Air Canada could not prove who configured the agent, what policies governed it, or who was accountable. That is the accountability gap that identity and authorization close.

Enterprises deploy these ephemeral agents as digital workers: rebooking passengers, adjusting insurance claims, triaging intake, and approving credit. As we explored in part five of this series, some control physical systems such as HVAC and facility doors, and some feed dosage recommendations into clinical decision streams. These are decisions with physical consequences.

When an agent action is challenged, the legal system needs a human on the other end. Identity tells you which agent acted. Authorization tells you what it was permitted to do. Without both, the digital workforce can cause harm but cannot answer for it.

The Friction Fallacy

In nearly every conversation about agent security, the same objection surfaces: "We do not want to add friction. We do not want to slow down AI adoption."

This is the wrong framing. The question is what happens when an agent without these controls makes headlines.

An agent exfiltrating corporate data. An agent exposing PHI. An agent giving wrong legal advice to thousands. An agent adjusting medication dosages from records it was never authorized to access. Every one of these is one control layer away from being contained.

Air Canada's AI chatbot gave one wrong answer to one passenger, and it became a landmark AI liability ruling. The cases above involved simple chatbots. The AI agents enterprises deploy today have far broader access, touch far more systems, and act at far greater speed.

The friction of identity and authorization is measured in milliseconds per token issuance. The friction of a breach, a regulatory investigation, a class action, or an SEC filing is measured in months, millions, and careers. Deploying agents without these controls does not remove friction. It defers it to the worst possible moment.

The Clock Is Not Theoretical

Six of the eight frameworks are already enforceable. GDPR, SOX, CCPA, NIS2, DORA, and SEC cybersecurity disclosure rules are collecting penalties today. The Colorado AI Act takes effect June 30, 2026, the first comprehensive US state AI law. A proposed replacement framework (March 2026) would shift requirements from impact assessments to transparency, record-keeping, and consumer rights, with exclusive enforcement by the state AG. Either way, the law requires controls that trace back to identity and authorization. The EU AI Act follows no later than December 2, 2027, and conformity assessments take six to twelve months, so organizations need to start preparing now. COSO published its AI internal controls guidance on February 23, 2026. PCAOB inspection plans explicitly include GenAI.

The attribution gap is not a compliance checkbox. Companies will be held accountable whether or not they put the controls in place. The question is whether you can demonstrate accountability when regulators and courts come asking.

Identity and authorization help close the gap. Colorado takes effect June 30, 2026. The EU AI Act high-risk requirements follow no later than December 2, 2027. The window is closing.

The controls to close this gap are well understood and available today. Learn how Okta and Auth0 secure AI agents at okta.com/ai and auth0.com/ai.
