Everyone uses AI, you too
AI Agents are software components based on LLMs that make decisions. They plan, execute, and iterate across multiple applications in your enterprise. They combine human and non-human characteristics: On the one hand, they are not human; their authentication and authorization mechanisms utilize service accounts, passwords, API tokens, and OAuth tokens. On the other hand, they are non-deterministic, they can be fooled, and they are not predictable.
These days, it feels like you must utilize AI quickly. Your competition might be using it to accelerate their work; it may help you be more efficient in your current job. It's a good guess that YOUR leadership has announced a major AI project that will drive your organization's AI adoption.
The problem? The infrastructure moves faster than the Identity and Security infrastructure. The pressure to adopt AI is faster than the efforts to mitigate its risks. Unauthorized use of AI may cause security and compliance issues, exposing sensitive corporate resources to external, unauthorized models or workloads. Most of these risks are well-known risks, but their intensity and prioritization have changed.
Risk 1 - you lose control
Innovative employees rush to use AI, backed by the organization’s leadership. At the same time, IAM and Security teams can’t track who uses what. New applications are onboarded to the enterprise environment, new app-to-app connections are created, and the teams responsible for managing sensitive information are excluded and unable to fulfill their responsibilities.
This “invisibility” can conceal a connection between the same LLM and both a sensitive financial resource and an external server. Because it’s based on an LLM, an attacker from an external server can trick the agent into leaking data from a sensitive resource.
*Image generated with Gemini
To mitigate this risk, you need to track the AI tools your employees use, the actions their agents perform, the resources connected to those agents, and the permissions granted to them. Learn more about how Okta ISPM can help mitigate these risks.
Risk 2 - delegation of trust
In many cases, agents act on behalf of a user. For example, connecting your ChatGPT to your Notion will enable the chat to access and read your data in Notion, according to your permissions and the scopes you granted to it.
*Image generated with Gemini
Note that in some cases, the end-user is responsible for the consent process. This means that an employee can decide on their own what privileges and access app A has to app B on their behalf. This is a decision the administrator would prefer to have.
This risk can be avoided by blocking user-consent flows within the crown jewels and by managing app-to-app connections. Okta Cross App Access (XAA) solves this exact problem. You can learn more about it here.
Risk 3 - increased usage of long-lived secrets
The AI use cases may lead to more usage of long-lived secrets. This may be necessary to allow function calling and integration with other services. This may occur when downloading an API token to connect to an MCP server, or when copying and pasting credentials to connect an agent builder to another SaaS.
*Image generated with Gemini
Improper handling of these secrets can increase the risk of secret leakage. Infostealers, such as “Shai Hulud,” find plain-text secrets that are improperly handled and steal them. To mitigate this risk: First, manage secrets with PAM solutions like OPA. Also, prefer the use of ephemeral tokens (such as OAuth).
Conclusion
Agentic AI requires our attention to old-new risks: lack of visibility, app-to-app access management, and secrets leakage. Security teams must quickly gain visibility into their organization's Agentic AI posture. Try out Okta ISPM to gain more visibility and insights into both your human and non-human identities.