With their ability to automate work and analyze large data sets to complete complex tasks, AI agents represent a potentially significant leap forward in enterprise productivity. Realizing those gains, however, requires enabling agents to communicate securely with one another.
Enter Google Cloud and its ecosystem of partners. On May 20, Google Cloud announced updates to its Agent2Agent (A2A) protocol. As part of Okta's partnership with Google Cloud, the two organizations will collaborate to define A2A auth specifications and build SDKs and samples to demonstrate how to remotely authenticate agents via Auth0 and Auth for GenAI.
Developing standards that embed security into agent-to-agent communication is a critical step forward for agentic AI. Leveraging A2A will enable secure communication between agents designed using the protocol, potentially eliminating silos and allowing enterprises to use agentic systems more effectively. Agents can work together to accomplish tasks, regardless of which vendor built them, and draw on data from multiple sources across an enterprise.
"Standards foster interoperability," explained Aaron Parecki, Director of Identity Standards at Okta. "Developers can build integrations once and deploy them seamlessly across multiple platforms, rather than spending resources creating bespoke connections for each individual provider. For enterprises, this translates into broader choice, greater flexibility, and reduced vendor lock-in, allowing them to confidently select solutions that best align with their business needs."
Empowering that kind of interoperability requires security. AI agents are new to the digital playing field. They need to be woven into the enterprise identity fabric to enable proper levels of governance, visibility, and control. Standards are a start, but best practices like Zero Trust and least privilege should underpin them. Embedding robust identity and access management into agent-to-agent interactions is essential, Parecki said.
"As AI agents become more integrated into daily enterprise operations, it's critical to prioritize security and adhere to the principle of least privilege from the start," he said. "Just like with people, we need to ensure AI agents are granted only the exact level of access required for their tasks, preventing unnecessary or unintended exposure to sensitive information."
For example, if an employee uses an AI agent to review confidential documents, the agent's access should mirror or be a subset of that employee's permissions. The agent should not have broader, unrestricted access across the organization, he added.
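The delegation rule described above can be sketched in a few lines. This is an illustrative model only, not A2A or Auth0 code: the scope names and the `downscope` helper are hypothetical, standing in for whatever permission system an enterprise actually uses. The core idea is that an agent acting on a user's behalf receives the intersection of what it requests and what the user holds, so it can never exceed the user's own permissions.

```python
def downscope(user_scopes: set[str], requested_scopes: set[str]) -> set[str]:
    """Grant an agent only the scopes its delegating user actually holds.

    The returned set is always a subset of user_scopes, enforcing the
    least-privilege rule: the agent's access mirrors, or is narrower
    than, the employee's own access.
    """
    return user_scopes & requested_scopes


# Hypothetical example: the employee can read and comment on documents.
user = {"docs:read", "docs:comment", "calendar:read"}

# The agent asks for write access it should not receive.
agent_request = {"docs:read", "docs:write"}

granted = downscope(user, agent_request)
print(sorted(granted))  # docs:write is silently dropped
```

In a real deployment this check would typically happen at the authorization server when the agent exchanges the user's credential for its own token, rather than in application code, but the invariant is the same: granted access is never a superset of the delegator's.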
"The agentic AI landscape is rapidly evolving, and when software agents autonomously interact with services and data, the creation of robust standards and protocols is crucial," Parecki said.
Building upon established industry standards takes advantage of years of collaborative effort and collective expertise from leading professionals across the tech industry, he added.
"Starting with the solid foundation of established and proven standards, we can identify the gaps that emerge once these are applied to the new developments of agentic AI and collectively work on standardized solutions," he said.
This posting does not necessarily represent Okta's position, strategies, or opinion.