Agentic systems are applications in which autonomous agents plan, reason, and act across multiple services. These applications are designed to move beyond simple request/response interactions.
Unlike traditional API-driven microservices, which depend on fixed request payload schemas and interpret requests syntactically, agentic applications use Large Language Models (LLMs) to interpret requests semantically. This enables them to plan multi-step tasks, chain actions together, and invoke multiple services autonomously on behalf of a user.
As these systems mature, they increasingly resemble distributed applications composed of loosely coupled agents, services, and external APIs working in concert. With this added flexibility comes greater complexity and the need for a consistent, secure approach to identity.
Security issues posed by agentic systems
A central challenge in these environments is identity propagation: ensuring that a user's identity and intent can be securely carried across agents, services, and tools. Without strong authentication and authorization, an agent might invoke a service without the right context, overstep its privileges, or bypass enterprise policies.
Okta Cross-App Access provides a standards-based way to exchange identity tokens between applications. Combined with the Model Context Protocol (MCP), which gives agents a consistent way to discover and call tools, these capabilities create a trusted fabric that ties users, agents, and services together.
The following sections illustrate a sample architecture for integrating Okta Cross-App Access with AI agents running on Amazon Web Services (AWS) serverless services, such as AWS Lambda or Amazon Elastic Container Service (Amazon ECS), to solve these challenges. You'll see how a user authenticates with Okta, how their identity is propagated through agents via token exchange, and how downstream services validate and enforce access.
By combining Okta's enterprise identity platform with AWS's serverless and container services, organizations can enable secure agent-to-agent and agent-to-service collaboration at scale.
Solution architecture
This section outlines a sample end-to-end design that ties Okta identity to agent-to-service calls on AWS, with MCP providing a consistent protocol for agents to discover and invoke tools. You'll see the major components, the token flow, and how identity is propagated and enforced at each hop.

Authentication and authorization flows
The sample architecture begins with a user interacting with an agentic client application, such as a web or desktop interface. The client integrates with Okta using OpenID Connect (OIDC) to handle authentication. Once signed in, the client receives an ID token that represents the user's identity. Rather than using that token directly for downstream calls, the client leverages the Cross-App Access (XAA) SDK to exchange it for an Identity Assertion Grant (ID-JAG), which is bound to a specific downstream audience. This establishes the foundation for secure identity propagation across applications.
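Conceptually, the exchange the XAA SDK performs resembles an OAuth 2.0 Token Exchange (RFC 8693) request. The sketch below shows the shape of that call in TypeScript; the token endpoint, the requested_token_type URN (taken from the Identity Assertion Authorization Grant draft), and the audience value are assumptions that should be confirmed against Okta's Cross-App Access documentation.

```typescript
// Minimal sketch of exchanging an Okta-issued ID token for an ID-JAG.
// Endpoint, URNs, and parameter values are illustrative assumptions; in practice
// the Cross-App Access (XAA) SDK performs this exchange for you.
async function requestIdJag(idToken: string): Promise<string> {
  const response = await fetch("https://acme.okta.com/oauth2/v1/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
      subject_token: idToken,
      subject_token_type: "urn:ietf:params:oauth:token-type:id_token",
      requested_token_type: "urn:ietf:params:oauth:token-type:id-jag", // per the ID-JAG draft
      audience: "https://auth.internal.example.com", // the downstream authorization service
      // Client authentication (client_id/client_secret or private_key_jwt) omitted for brevity.
    }),
  });
  if (!response.ok) {
    throw new Error(`ID-JAG exchange failed: ${response.status}`);
  }
  const body = await response.json();
  // Per RFC 8693, the issued token is returned in the access_token field, regardless of its type.
  return body.access_token;
}
```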

The ID-JAG is then presented to an authorization service running on AWS, as illustrated in the earlier diagram. This service verifies the assertion and issues a short-lived, audience-restricted access token. That token is scoped specifically for the resource the agent needs to call. By introducing this step, you can enforce clear trust boundaries and ensure least-privilege access to downstream services.
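As a rough sketch of what such an authorization service might do, assuming it uses the jose JavaScript library, it verifies the incoming assertion against Okta's signing keys and then mints its own short-lived, audience-restricted token. The issuer URLs, audiences, scopes, and JWKS path below are placeholders, not values prescribed by Okta or AWS.

```typescript
import { createRemoteJWKSet, jwtVerify, SignJWT, importPKCS8 } from "jose";

// Placeholder Okta org and service identifiers; replace with your own.
const OKTA_ISSUER = "https://acme.okta.com";
const oktaJwks = createRemoteJWKSet(new URL(`${OKTA_ISSUER}/oauth2/v1/keys`));

export async function issueAccessToken(idJag: string, signingKeyPem: string): Promise<string> {
  // 1. Verify the ID-JAG: signature, issuer, and that it was minted for this service.
  const { payload } = await jwtVerify(idJag, oktaJwks, {
    issuer: OKTA_ISSUER,
    audience: "https://auth.internal.example.com", // this authorization service
  });

  // 2. Mint a short-lived, audience-restricted access token for a single MCP resource.
  const signingKey = await importPKCS8(signingKeyPem, "RS256");
  return await new SignJWT({
    scope: "documents.read", // least privilege: only what the agent needs right now
    azp: payload.azp,        // the client/agent acting on the user's behalf
  })
    .setProtectedHeader({ alg: "RS256" })
    .setIssuer("https://auth.internal.example.com")
    .setAudience("mcp://documents")   // audience-restricted to one MCP server
    .setSubject(String(payload.sub))  // preserve the end user's identity
    .setIssuedAt()
    .setExpirationTime("5m")          // minutes, not hours
    .sign(signingKey);
}
```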
With the access token in hand, the agent can call tools and resources hosted on MCP servers. Each MCP server validates the incoming token before performing any action. These tools and resources may, in turn, call downstream protected resources, such as service APIs. When these services run on AWS, they can enforce their own policies, relying on IAM roles and validating token claims such as user identity, scope, and tenant. This layered enforcement ensures that identity, intent, and authorization are carried consistently from the user all the way through to the backend systems.
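The sketch below illustrates how an MCP server (or any downstream service) might guard a tool invocation, again assuming the jose library; the issuer, audience, and scope names carry over from the previous sketch and are placeholders.

```typescript
import { createRemoteJWKSet, jwtVerify, type JWTPayload } from "jose";

// Placeholder issuer/audience; use the values from the authorization service above.
const ISSUER = "https://auth.internal.example.com";
const AUDIENCE = "mcp://documents";
const jwks = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

// Validate the bearer token at this boundary, independently of any upstream checks.
export async function authorizeToolCall(
  authorizationHeader: string | undefined,
  requiredScope: string
): Promise<JWTPayload> {
  const token = authorizationHeader?.replace(/^Bearer\s+/i, "");
  if (!token) throw new Error("Missing bearer token");

  // Checks signature, issuer, audience, and expiration.
  const { payload } = await jwtVerify(token, jwks, { issuer: ISSUER, audience: AUDIENCE });

  // Enforce the scope this specific tool requires before doing any work.
  const scopes = String(payload.scope ?? "").split(" ");
  if (!scopes.includes(requiredScope)) {
    throw new Error(`Token is missing required scope: ${requiredScope}`);
  }
  return payload; // the caller can log sub, azp, and act for auditing
}
```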
Considerations and best practices
When building secure agentic applications, engineering teams should adopt stringent security controls and proven best practices. The following principles provide a strong foundation for securely building agentic systems and MCP servers.
Identity validation
Validate identity at every boundary. Each AI agent and MCP server must independently verify access tokens, checking issuer, audience, scope, signature, and expiration. Never rely on upstream validation alone; this ensures agents cannot bypass enforcement by chaining calls.
Token management
Use short-lived, scoped tokens. Always follow the principle of least privilege and scope tokens to the minimum required permissions. Access tokens issued for agents should be valid for a short period only (minutes, not hours) and tied to specific MCP tools (e.g., documents.read vs. documents.write). If an agent needs to invoke another tool, perform a token exchange to issue a new token with a narrower scope.
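A narrowing exchange might look like the following sketch, using the RFC 8693 parameter names against a hypothetical token endpoint; the scope and audience values are illustrative.

```typescript
// Exchange a broader access token for a narrower one before calling the next tool.
async function narrowToken(currentAccessToken: string): Promise<string> {
  const response = await fetch("https://auth.internal.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
      subject_token: currentAccessToken,
      subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
      audience: "mcp://documents",
      scope: "documents.read", // request only the narrower scope the next tool needs
    }),
  });
  if (!response.ok) throw new Error(`Token exchange failed: ${response.status}`);
  const { access_token } = await response.json();
  return access_token;
}
```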
Context propagation
Propagate user context securely. Use Okta Cross-App Access to transport identity across services. When agents invoke MCP servers, include claims such as the end-user subject (sub), authorized party (azp), audience (aud), and actor (act) to track on-behalf-of execution. This ensures downstream services can enforce policies knowing both who initiated the request and why it was made.
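For illustration, a decoded token payload carrying that context might look like the following; the claim values are examples, and the act claim follows the RFC 8693 convention for expressing on-behalf-of chains.

```typescript
// Illustrative decoded payload of a token arriving at an MCP server; values are examples.
const claims = {
  iss: "https://auth.internal.example.com",
  sub: "00u1abcd2EFGHijkl345",              // the end user who initiated the request
  azp: "agent-orchestrator",                // the authorized party (agent) acting for the user
  aud: "mcp://documents",                   // the specific MCP server this token is valid for
  scope: "documents.read",
  act: { sub: "research-assistant-agent" }, // on-behalf-of chain, per the RFC 8693 "act" claim
  exp: 1767225600,                          // short expiration enforced downstream
};
```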
Observability and auditing
Audit and trace agent chains end-to-end. Log structured events and authorization decisions at each hop, showing the full actor trace (for example: user → agent → MCP tool), the scopes used, and the decisions made. Use observability services such as Amazon CloudWatch and AWS X-Ray to trace requests across multiple agents and tools, so you can answer questions like: Who asked the agent to act? What tool was invoked? What data was accessed?
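A minimal structured audit event, written to stdout so that Lambda ships it to CloudWatch Logs, might look like the following sketch; the field names are illustrative rather than a prescribed schema.

```typescript
// Minimal structured audit event; Lambda forwards stdout to CloudWatch Logs.
// Field names are illustrative; align them with your own logging schema.
console.log(JSON.stringify({
  timestamp: new Date().toISOString(),
  actorChain: ["user:00u1abcd2EFGHijkl345", "agent:agent-orchestrator", "tool:documents.read"],
  scopes: ["documents.read"],
  decision: "allow",
  resource: "mcp://documents",
  traceId: process.env._X_AMZN_TRACE_ID, // correlate the event with the AWS X-Ray trace
}));
```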
AWS-specific best practices
- IAM roles: Use the minimum required permissions for Lambda function execution roles. Avoid reusing execution roles across multiple functions.
- API Gateway: Use Lambda authorizers to protect your APIs (see the sketch after this list). Enable CORS, rate limiting, and AWS WAF. Consider mTLS when communicating across system components.
- CloudWatch: Use for centralized logging with retention policies, operational and business metrics, and end-to-end tracing.
- VPC configuration: Consider using VPC for additional network isolation if needed.
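As referenced in the API Gateway item above, the following sketch shows a Lambda token authorizer that validates an incoming JWT with the jose library before API Gateway forwards the request; the issuer, audience, and JWKS URL are placeholders.

```typescript
import type { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from "aws-lambda";
import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholder issuer and audience; point these at your actual authorization service.
const ISSUER = "https://auth.internal.example.com";
const AUDIENCE = "mcp://documents";
const jwks = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  const token = (event.authorizationToken ?? "").replace(/^Bearer\s+/i, "");
  try {
    // Verify signature, issuer, audience, and expiration before API Gateway forwards the call.
    const { payload } = await jwtVerify(token, jwks, { issuer: ISSUER, audience: AUDIENCE });
    return policy(String(payload.sub), "Allow", event.methodArn, {
      scope: String(payload.scope ?? ""),
    });
  } catch {
    return policy("anonymous", "Deny", event.methodArn);
  }
};

// Build the IAM policy document API Gateway expects from a Lambda authorizer.
function policy(
  principalId: string,
  effect: "Allow" | "Deny",
  resource: string,
  context?: Record<string, string>
): APIGatewayAuthorizerResult {
  return {
    principalId,
    policyDocument: {
      Version: "2012-10-17",
      Statement: [{ Action: "execute-api:Invoke", Effect: effect, Resource: resource }],
    },
    context,
  };
}
```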
Agentic applications open the door to new ways of automating tasks and connecting services, but they also introduce new security considerations. Every agent and MCP server must operate with the right user context, enforce least privilege, and leave a clear audit trail. Without strong foundations in authentication, authorization, and identity propagation, these systems can quickly become unmanageable or unsafe.
By combining Okta's Cross-App Access and identity solutions with AWS's serverless and container services, organizations can build agentic systems that scale securely. The approach outlined in this post demonstrates how to authenticate users, propagate their identity through agents, issue scoped tokens for MCP tools, and enforce policy at every boundary.
Deploying your AI agents using AWS’s serverless technologies provides several advantages:
- Scalability and reliability: Workloads are distributed across multiple Availability Zones and scale automatically based on traffic patterns.
- Cost optimization: Pay-per-use pricing means you pay only for the resources you actually consume.
- Managed infrastructure: AWS fully handles infrastructure management for you, including operations, security, and resilience.
- Integration: Native integrations with a multitude of AWS services.
- Built-in observability: Comprehensive monitoring with CloudWatch and X-Ray.
To learn more, explore the Okta Cross-App Access documentation and AWS serverless services.
Resources
- Okta Cross-App Access
- MCP Specification
- AWS Lambda Documentation
- AWS SAM Documentation
- API Gateway Documentation