Recent test results from an AI lab have renewed attention on a question that is becoming harder for enterprises to ignore: What happens when AI systems are no longer limited to generating output, but are increasingly able to take action?
In the tests, AI agents tasked with a relatively simple business function – creating LinkedIn posts from company data – found ways to publish sensitive password information, work around security controls, override antivirus protections to download malware, forge credentials, and even pressure other agents to bypass safeguards.
Considered alongside the growing number of public examples of AI systems behaving unexpectedly, these findings point to something bigger than a few isolated incidents.
They point to a shift in the role AI is beginning to play inside organizations.
AI moves from output to action
For much of the recent AI cycle, enterprise attention centered on outputs — how well systems could summarize information, generate content, answer questions, or assist employees in completing tasks. That was only the beginning. AI agents are now being designed to retrieve data, call tools, interact with applications, trigger workflows, and carry out tasks across systems.
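To make that shift concrete, here is a minimal Python sketch of an agent-style dispatch loop. The `ToolCall` structure, the tool names, and the hard-coded plan are illustrative assumptions rather than any particular framework's API; in practice the plan would be generated by a model at runtime.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str   # which capability to invoke
    args: dict  # model-supplied arguments

# Side-effecting capabilities the agent can reach. This registry is where
# model "output" becomes real "action" inside business systems.
TOOLS: dict[str, Callable[..., str]] = {
    "fetch_report": lambda quarter: f"revenue summary for {quarter}",
    "post_to_linkedin": lambda text: f"published: {text[:40]}",
}

def run_agent(plan: list[ToolCall]) -> None:
    """Execute a model-produced plan step by step, with no human in the loop."""
    for call in plan:
        result = TOOLS[call.name](**call.args)
        print(f"{call.name} -> {result}")

# In a real agent this plan would come from the model, not be hard-coded.
run_agent([
    ToolCall("fetch_report", {"quarter": "Q3"}),
    ToolCall("post_to_linkedin", {"text": "Q3 results are in."}),
])
```

The loop is trivial by design; the point is that every entry in `TOOLS` is a capability with side effects, and nothing in the loop itself constrains which entries the model may invoke.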
That changes the enterprise security and governance equation.
Once an AI system can take action inside business environments, the issue is no longer just whether its output is accurate. Organizations must now consider what it can access, what it has permission to do, and how its access is governed over time.
That is why the latest discussion around rogue AI agents matters. It reflects a broader reality: Organizations are beginning to delegate access and authority to software in ways that were once reserved for people. An AI agent can now determine the next step in a process, retrieve sensitive information, invoke downstream tools, and act on behalf of a user.
The ripple effect (and risk) of autonomous actions
The risk is not just what a single agent can do. It is the scale at which these systems can operate. A single prompt can trigger chains of actions across multiple applications, APIs, databases, and tools, creating a surge in machine-to-machine access decisions that can quickly outpace human oversight.
As more teams deploy agents, including ungoverned shadow AI agents, that volume will only accelerate. The more connected and autonomous these systems become, the more important it is to define what they can connect to, what they can do, and where the limits should be.
This is where traditional models of trust begin to show strain. Enterprise security has historically focused on managing access for human users and governing applications that generally followed fixed rules.
AI agents introduce something different: non-human actors that can interpret goals, adapt to context, and chain actions together. Trust can no longer be defined only by a login event or a static role. It also cannot depend on security and access controls embedded within a single cloud or platform provider’s “walled garden.” It has to include visibility, least-privilege access, and continuous governance over what an agent is permitted to do, which requires a centralized identity control plane that can work across models, tools, clouds, and frameworks.
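As a rough sketch of what a deny-by-default decision point could look like, consider the following. The policy schema, agent names, and in-memory store are hypothetical; a real control plane would serve this decision from outside any single platform.

```python
# Hypothetical least-privilege policy store: agent identity -> granted tools.
# In practice this lookup would be served by a centralized identity control
# plane, not an in-process dictionary.
POLICIES: dict[str, set[str]] = {
    "linkedin-post-agent": {"fetch_report", "draft_post"},
}

def is_allowed(agent_id: str, tool: str) -> bool:
    """Deny by default: an agent may invoke only tools it was explicitly granted."""
    return tool in POLICIES.get(agent_id, set())

assert is_allowed("linkedin-post-agent", "fetch_report")
assert not is_allowed("linkedin-post-agent", "read_password_vault")
assert not is_allowed("unknown-agent", "fetch_report")  # unregistered agents get nothing
```

The important property is the default: an agent unknown to the control plane can do nothing, regardless of which cloud or framework it runs in.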
Three essential questions for the secure agentic enterprise
For organizations adopting AI agents, securing them starts with three foundational questions.
Where are my agents?
Before teams can govern AI agents, they need visibility into where those agents exist across their environment. That means understanding what agents are running and which teams are deploying them.
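A simple way to picture that visibility is an inventory that records every discovered agent and flags the ones no team ever registered. The record fields and sample entries below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owning_team: str
    environment: str   # e.g. "prod" or "staging"
    registered: bool   # False = running, but never went through governance

# Hypothetical results of scanning deployments for running agents.
discovered = [
    AgentRecord("linkedin-post-agent", "marketing", "prod", True),
    AgentRecord("invoice-triage-bot", "finance", "prod", False),
]

# Shadow AI: agents that exist in the environment but are unknown to governance.
for agent in (a for a in discovered if not a.registered):
    print(f"ungoverned agent: {agent.agent_id} (owner: {agent.owning_team})")
```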
What can they connect to?
An agent’s usefulness is often defined by the systems, data, APIs, and tools it can reach, but this connectivity is also where risk expands. Organizations need a clear picture of which applications and resources each agent can access, whether that access is necessary, and whether it is constrained by policy and least-privilege principles.
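One practical pattern is to compare what each agent has been granted against what it actually uses, then shrink the gap. The sketch below assumes both sets can be enumerated; the agent and resource names are hypothetical.

```python
# Hypothetical entitlement data: what an agent may reach vs. what it touches.
granted = {
    "linkedin-post-agent": {"reports-db", "linkedin-api", "hr-records"},
}
observed = {
    "linkedin-post-agent": {"reports-db", "linkedin-api"},
}

for agent_id, resources in granted.items():
    unused = resources - observed.get(agent_id, set())
    if unused:
        # Access that is granted but never exercised is the first candidate
        # for revocation under least privilege.
        print(f"{agent_id}: consider revoking {sorted(unused)}")
```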
What can they do?
This is ultimately the most important question. There is a significant difference between an agent that can summarize a support case and one that can reset a credential, approve a request, modify a record, or trigger a transaction. As AI agents move from assistance to action, enterprises need stronger control over which actions are allowed, under what conditions, and with what oversight.
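A hedged sketch of what action-level control might look like: actions are tiered by impact, low-impact actions flow freely, and anything that changes state requires explicit oversight. The tiers, action names, and approval flag are assumptions, not a prescribed scheme.

```python
from enum import Enum

class Tier(Enum):
    READ = 1        # e.g. summarize a support case
    WRITE = 2       # e.g. modify a record, approve a request
    PRIVILEGED = 3  # e.g. reset a credential, trigger a transaction

# Hypothetical mapping of agent actions to impact tiers.
ACTION_TIERS = {
    "summarize_case": Tier.READ,
    "update_record": Tier.WRITE,
    "reset_credential": Tier.PRIVILEGED,
}

def authorize(action: str, human_approved: bool = False) -> bool:
    tier = ACTION_TIERS.get(action)
    if tier is None:
        return False          # unknown actions are denied outright
    if tier is Tier.READ:
        return True           # assistance-level actions proceed unattended
    return human_approved     # state-changing actions need a human in the loop

print(authorize("summarize_case"))                         # True
print(authorize("reset_credential"))                       # False
print(authorize("reset_credential", human_approved=True))  # True
```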
Taken together, these questions point to a larger requirement: AI agents need to be treated as first-class identities in the enterprise. That means organizations need to provision identities for those agents, gain visibility into where they operate, enforce controls over what they can access, and govern that access over time.
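In code terms, a first-class agent identity might look like a scoped, time-bound record that must be re-certified rather than a credential that lives forever. The field names and the 90-day review window below are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str          # accountable team, not just a service account
    scopes: set[str]    # least-privilege grants, not blanket access
    expires_at: datetime

    def needs_review(self) -> bool:
        """Access is governed over time: grants lapse unless re-certified."""
        return datetime.now(timezone.utc) >= self.expires_at

identity = AgentIdentity(
    agent_id="linkedin-post-agent",
    owner="marketing",
    scopes={"reports:read", "linkedin:post"},
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
print(identity.needs_review())  # False until the review window lapses
```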
In the era of the agentic enterprise, success will be defined not just by how many agents an organization can deploy, but by how well it can secure them across real systems and workflows.
To learn more, check out Okta’s blueprint for the secure agentic enterprise.