Empowering AI without security is a dangerous gamble. Yet many organizations appear to be leaping before they look.
Okta's recent AI at Work 2025 report reveals that only 36% of businesses have a centralized governance model for AI, and just 32% say they always treat their non-human workforce with the same degree of governance as their human workforce.
Meanwhile, AI adoption appears to be progressing steadily. Additional statistics from the researchers show “widespread” adoption, situations where nearly all teams are using at least some AI, rising from 17% in 2024 to 28% in 2025. As adoption grows, so does the pressure on organizations to rethink the intersection of governance, security, and identity.
“Governance and access control are critical given the level of access and ability to execute that AI may have,” one respondent, a C-level executive in the banking and finance industry, told researchers.
Below, we’ve laid out five recommendations for developing an effective AI governance strategy that builds a secure and trustworthy foundation for AI innovation and growth.
Decide on a framework for AI governance
Fortunately, enterprises are not without guidance on governance. Standards and frameworks such as ISO 42001 and the NIST AI Risk Management Framework are available to guide users, developers, and others in managing AI risks.
"Both NIST AI RMF and ISO 42001 provide excellent structure and benefits for organizations of all types," says Tom Ross, Director, Security Governance at Okta. "Which framework is best is highly dependent on the individual organization, but fundamentally, NIST is principles-based while ISO is operations-based. Many organizations choose to align with both."
Set the tone at the top
The first major hurdle in writing effective policy to support an AI governance strategy is understanding and clearly defining the tone at the top, Ross explains. That means developing an AI strategy that outlines how the technology will be used across the organization, in alignment with business and security goals, and creating a formal process for assessing third-party systems, applications, and agents.
"Does organization leadership want people to use AI tools freely, or not at all? Is experimentation and tinkering encouraged, or does management prefer a locked-down, by-permission-only model? When the company's AI strategy is clear, writing policy to support the strategic direction becomes much more straightforward," he says. "Driving change through policy does not work; supporting change and providing clarity through policy is the best path forward."
Put guardrails around AI
A deeper look at the AI at Work report shows the most common stakeholders involved in AI governance discussions are CISOs, CIOs, and representatives from the legal and compliance teams. Fundamentally, AI governance is an extension of the guidelines around the responsible use of services and software, but with many new twists and turns, Ross explains. The simplest approach is to put guardrails and oversight around what is being shared with an AI tool (i.e., data security), what is happening within the AI tool (i.e., third-party risk), and how AI-created output is being leveraged (i.e., code of conduct and ethics), he continues.
Choosing the right guardrails for a particular tool requires a thorough understanding of risk, including everything from AI bias and hallucinations to cybersecurity considerations.
"Unchecked AI adoption can quietly erode an organization's security posture by dramatically expanding its attack surface,” says Sendhil Jayachandran, Vice President of Product Marketing at Okta. “An AI-centric security strategy involves identifying all AI access points, securing them, and introducing humans in the loop for sensitive actions."
Manage AI agent identity and access
Effective identity and access management (IAM) must underpin any AI strategy. Because of their autonomous and dynamic nature, AI agents challenge enterprises' existing IAM approaches. Many non-human identities, such as AI agents, use static keys to manage access, and this approach comes with drawbacks: static keys are more vulnerable to theft, create management challenges as agents proliferate, and increase risk by enabling continuous access even when it is not warranted. Overpermissioned agents can also magnify the impact of a compromise because of their access levels, and they create security risks as they make autonomous decisions that security teams may not be able to predict.
“Without a strong identity governance framework, AI agents operate in a blind spot, creating a massive, unauditable expansion of the attack surface,” says Jayachandran. “True control means enforcing the principle of least privilege with machine-like efficiency: dynamically granting permissions for a specific function and ensuring those permissions expire the moment the task is done. Anything else is simply creating a new class of powerful, ungoverned insiders.”
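To illustrate the least-privilege, expiring credentials Jayachandran describes, here is a rough sketch. The AgentToken type and mint_task_token function are hypothetical stand-ins for what an organization's identity provider would actually issue, typically an OAuth-style grant rather than a locally generated secret.

```python
import time
import secrets
from dataclasses import dataclass

@dataclass
class AgentToken:
    value: str
    scopes: tuple[str, ...]  # least privilege: only what this task needs
    expires_at: float        # hard expiry, independent of manual revocation

    def is_valid_for(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at

def mint_task_token(agent_id: str, scopes: tuple[str, ...],
                    ttl_seconds: int = 300) -> AgentToken:
    """Issue a short-lived, narrowly scoped credential for one agent task.

    Unlike a static key, the token carries only the permissions the task
    requires and becomes useless minutes later, limiting the blast radius
    of a leak. In a real issuer, agent_id would be bound into the token's
    claims for auditing.
    """
    return AgentToken(
        value=secrets.token_urlsafe(32),
        scopes=scopes,
        expires_at=time.time() + ttl_seconds,
    )

# Example: an agent summarizing invoices gets read-only billing access
# for five minutes, and nothing else.
token = mint_task_token("invoice-agent", scopes=("billing:read",))
assert token.is_valid_for("billing:read")
assert not token.is_valid_for("billing:write")
```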
Continually monitor usage and fine-tune policies
Organizations seem to recognize the importance of these issues. Governance is the third most-mentioned element of successful AI adoption, with processes and guardrails for data quality assurance and clearly defined use cases coming in first and second, respectively.
But the work of securing systems is never truly done. Follow-up is critical, as AI models and security policies need to be monitored and tuned as necessary to ensure effectiveness.
"Monitoring the usage of AI tools through cloud access security broker (CASB) and data loss prevention (DLP) platforms is a great place to start when looking to quantify the effectiveness of an AI governance strategy," says Ross. "Not only does monitoring provide insight into how well the organization is following the defined governance structure, but it can help highlight the emergence of popular new services that should be considered for inclusion and/or approval."
For more on AI governance and identity management, read The ‘superuser’ blind spot: Why AI agents demand dedicated identity security.