Organisations across Asia Pacific and Japan (APJ) are accelerating their use of AI, rapidly integrating non-human identities (NHIs) into their critical workflows. However, the latest findings from Okta’s Oktane on the Road AI security poll reveal a critical gap: While AI adoption is rising fast, governance, accountability, and identity controls are not keeping pace.
The data, collected from Australia, Singapore, and Japan, points to a security landscape quickly shifting from protecting human access to securing an explosion of AI agents, bots, and service accounts.
Unclear accountability and shadow AI present new risks
This shift in security priorities is further complicated by confusion over who is responsible for AI-related risks:
In Australia, 41% of organisations report that no single owner manages AI security risk.
This trend is mirrored in Singapore (25%) and Japan (29%), where accountability remains unclear for at least a quarter of respondents.
This fragmented ownership is fuelling the growth of shadow AI, now the region’s biggest blind spot. In both Australia (35%) and Singapore (33%), shadow AI — the use of unapproved or unsupervised AI tools — is cited as the top security concern. Japan highlights data leakage (36%) as its primary risk, followed closely by unapproved agents.
The lack of visibility into AI agent behaviour further compounds the issue. Fewer than a third of organisations in the region feel confident in their ability to detect an AI agent acting outside its intended scope. Just 18% in Australia and 31% in Singapore express confidence, a figure that drops to a low of 8% in Japan. As AI agents become autonomous digital workers, this confidence gap represents a substantial and growing operational risk.
Identity systems are unprepared for the AI workforce
The most significant challenge highlighted by the poll is the unpreparedness of existing identity and access management (IAM) systems.
Across all three countries, fewer than 10% of organisations say their IAM systems are fully equipped to secure AI agents, bots, and service accounts. The majority describe their systems as only partially ready, signalling that the next frontier of identity security is here: managing and securing non-human identities.
This challenge is rising to the highest levels of the organisation, though engagement varies:
Australia: 70% of boards are aware of AI-related risks, but only 28% are fully engaged.
Singapore: 50% awareness, with 31% full engagement.
Japan: 78% awareness, the highest in the region, with 43% of boards fully engaged. This is driven by tightening regulatory expectations and a cultural focus on data integrity.
The path forward: Build trust through identity
The Okta poll demonstrates that while APJ's AI momentum is strong, there is a clear need for trust, governance, and identity to advance alongside it.
As AI becomes deeply embedded in mission-critical workflows, organisations must shift their focus. They will need to secure not just human employees, but every system, every integration, and, critically, every AI agent acting on their behalf.
Identity is where that assurance begins. When identity controls are robust and designed to manage both human and non-human workers with the same rigour, trust follows. This foundation is essential for AI to become a sustainable, strategic, and secure advantage for the region.
Methodology
All results are from live, interactive polls conducted during the Oktane on the Road event series in Sydney, Melbourne, Tokyo, and Singapore in October and November 2025. The polls, administered using the Slido platform, targeted Okta customers and partners, specifically senior IT and cybersecurity professionals attending the events. The total number of respondents was 435.
The poll consisted of five questions gauging executive sentiment on the use of AI, designed to measure where customers stand today and how they are evolving on their AI journey. The collected data was used to generate country-specific insights.