From Principles to Practice: Implementing AI Ethics in Organizations

17 February 2026

There is a new frontier of cybersecurity—a challenge few are talking about. Forty-five billion non-human identities, or AI agents, are expected by the end of this year. As an identity security company, Okta's unique role in ethical AI is to secure AI agents, enabling safety and privacy. Embedding ethical AI in our engineering practice isn't an exception; it's integral to our strategy. Building on the approach shared in Responsible AI Innovation, we have found that effectively translating ethical AI principles into practice requires four key elements:

  1. Aligning principles with business, vision, and values to gain buy-in.
  2. Establishing governance to provide human oversight.
  3. Embedding principles into each function to make them relevant and practical.
  4. Supporting the broader industry to increase impact and be inclusive.

1. Aligning with Business, Vision, and Values

Okta's core business is providing identity management. We build software for IT teams to help their workforces securely access the applications they need, and for developers to build identity solutions for their customers in healthcare, finance, and other industries. Our platforms secure all types of identity, from AI agents to customers, employees, and partners. Since identity is the primary entry point for all digital interactions, securing it enables respect for human rights, including privacy, safety, and freedom from discrimination.

Ethical AI aligns directly with Okta’s core:

  • Business: Our business is to secure identities. This now includes AI agents — the next big cyber threat.
  • Vision: Our vision is to "free everyone to safely use any technology." Securing AI agents is a natural extension of this.
  • Values: Our Responsible AI Principles are directly mapped to our company values:
    • Love Our Customers: We are transparent with customers about our AI capabilities and how to configure them, empowering them to use our tools safely.
    • Always Secure. Always On: We focus on security, privacy, and safety by design.
    • Build and Own It: We hold ourselves accountable for our work.
    • Drive What’s Next: We innovate responsibly.

2. Establishing Governance and Paved Paths 

Okta formed a cross-functional AI Governance Team to oversee our Responsible AI strategy. Effective governance is often misunderstood as friction that hinders progress; in reality, it accelerates innovation while trust rides shotgun. AI tools pre-approved by this team make responsible AI adoption faster and put our Responsible AI Principles into practice. The team also provides a process for employees to request reviews so they can adopt new AI tools safely and securely, and it thoroughly reviews specific applied AI use cases to verify that our contractual obligations are upheld. Taking it further, our Engineering team has set up paved paths for AI tooling adoption. These 'paved paths' are vetted by the governance team and have privacy, security, and compliance guardrails baked in. This approach lets engineers focus on building innovative capabilities instead of navigating compliance ambiguity and friction.
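
The paved-path idea can be pictured as a simple policy gate: a tool is usable only if it has been pre-approved and is cleared for the data it will touch. The sketch below is purely illustrative — the tool names, data classifications, and policy fields are hypothetical assumptions, not Okta's actual registry or implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIToolPolicy:
    """Hypothetical record for a tool vetted by a governance team."""
    name: str
    approved: bool            # pre-approved via governance review
    data_classes: frozenset   # data classifications the tool may process

# Illustrative registry of paved-path decisions (names are made up).
PAVED_PATHS = {
    "code-assistant": AIToolPolicy(
        "code-assistant", True, frozenset({"public", "internal"})
    ),
    "experimental-llm": AIToolPolicy(
        "experimental-llm", False, frozenset()
    ),
}

def can_use(tool: str, data_class: str) -> bool:
    """Allow a tool only if it is approved and cleared for this data class."""
    policy = PAVED_PATHS.get(tool)
    return (
        policy is not None
        and policy.approved
        and data_class in policy.data_classes
    )
```

Anything that fails the gate — an unknown tool, an unapproved one, or an approved tool handling out-of-scope data — would be routed to a governance review rather than used directly, which is what keeps the guardrails "baked in" instead of optional.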

We recognize that employees need a space to experiment and innovate. To enable exploration, we've created structured experimentation zones, such as dedicated AI sandboxes during internal hackathons. These controlled environments allow for necessary oversight while building employee confidence and providing opportunities for them to learn and upskill.

3. Embedding Principles in Each Function

To apply ethical principles in practice, they must be tailored to the daily work of each function. This means translating abstract principles into simple, relevant guidance for the decisions each team makes.

For example, at Okta:

  • Our AI Strategy team is offering company-wide AI days that include ethical AI content, such as the Human-AI Partnership.
  • Our Design team established specific AI design principles.
  • Our Security team developed Okta’s Responsible AI Usage Guidelines.
  • Our Social and Environmental Impact team created team norms and best practices for AI, and created a Sustainability Guide to AI Usage with the engineering team for all Okta employees.
  • Our Engineering team organized an internal Engineering AI Day to facilitate knowledge sharing on AI efforts, with a focus on Responsible AI adoption.
  • Our Engineering team has set up paved paths for AI tooling adoption in line with the AI Governance Team’s recommendations. 
  • Our Talent Learning and Development team provided a required company-wide AI training bootcamp, customized for each function, that included a module on Responsible AI covering topics such as protecting sensitive information, hallucinations in AI assistants, bias in AI assistants, and Okta’s Responsible AI Principles and resources.
  • Each team and region has an AI Change Champion to lead the way on how we learn, experiment, and grow with AI. Okta employees have access to formal company-wide training that includes Responsible AI, as well as function-specific training, giving them opportunities to deepen their knowledge of responsible AI adoption. These new skills and experiences help employees succeed at Okta and in their careers.

4. Supporting the Industry and Community

Ethical AI is a shared responsibility. We work collaboratively across the industry, academia, and the public sector to advance ethical and responsible use. Okta for Good’s vision is to free everyone to safely use any technology. We do this through grant funding, our identity software platform, and the passion and expertise of our employees. Some examples of how Okta contributes include:

Public Interest Cybersecurity: We partner with UC Berkeley’s Center for Long-Term Cybersecurity (CLTC) to advance research in cyber and AI security and to improve the cybersecurity of a wide range of public interest organizations. In one initiative, we participated in a tabletop exercise conducted at the University of California, Berkeley, examining the rapidly evolving role of generative AI in cybercrime, and subsequently published research on the exercise in an IEEE journal. In another initiative, recognizing that cyberattacks increasingly threaten organizations that provide essential services, we partnered with the State of Washington’s WaTech and UC Berkeley’s CLTC to help protect the digital front lines of the social sector.

Digital Equity: Through Okta for Good (O4G), we provide free or discounted products and services to nonprofits. This includes access to Identity Threat Protection with Okta AI and financial grants for organizations that provide cyber and AI training to underrepresented talent. 

Establishing Open Standards: We are part of an OpenID Foundation working group to create IPSIE (Interoperability Profile for Secure Identity in the Enterprise), a new security standard for AI agents to securely interact across systems.

Strengthening the Ecosystem: We partner with ISVs to develop a new open protocol, Cross-app Access, that securely manages AI agent interactions.

Sharing Best Practices: We signal our commitment to the market through our public principles and by asking vendors about their sustainable AI practices. We also share our employee guide on how to use AI more sustainably, encouraging the industry to adopt efficient and responsible practices.

Looking Ahead 

At its core, going from principles to practice should be a continuous operational practice, not a one-and-done project. Whether it's securing a human workforce or 45 billion non-human identities, our mission is the same: to free everyone to safely use any technology. By anchoring our AI strategy this way, we aim to make sure innovation never compromises safety. For Okta, ethical AI is not just about mitigating risk – it's about architecting a system where technology serves humanity safely and reliably in the long term.

About the Author

Alison Colwell

Senior Director of Sustainability & Responsible Technology

Alison Colwell is the Senior Director of Sustainability & Responsible Technology at Okta, where she has worked for more than five years. As the first Head of Sustainability, Alison built the team and programs on climate, human rights, responsible AI, reporting and disclosures, and customer engagement. She also serves on Okta’s AI Governance Team and is the strategy lead for Okta’s partnership with the World Economic Forum (WEF). She is passionate about the interconnections of human rights, racial justice, and climate change.

Alison has worked for 20 years in sustainability, ESG, human rights, and responsible tech, including building the program at the LVMH brand Sephora, consulting with BSR (Business for Social Responsibility), and working with Novi Connect, a start-up using machine learning to make more sustainable products. She holds an MA in Public Policy and Administration from Carleton University, as well as a Bachelor of Commerce and a Bachelor of Arts in Global Development Studies from Queen’s University, Canada. Alison volunteers as a Fellow at Stanford’s Center for Human Rights and International Justice, guest lecturing, mentoring students, and learning from them, and serves as a Strategic Board member for the Business Council on Climate Change (BC3). She loves being an “auntie,” surfing, hiking, reading, birdwatching, and dancing.

Vrushali Channapattan

Director of Engineering

Vrushali Channapattan is the Director of Engineering at Okta, where she leads Data and AI initiatives with a strong focus on Responsible AI. In the past two decades, she has shaped large-scale data systems and contributed to open source as a Committer for Apache Hadoop. Before Okta, she spent nearly a decade at Twitter, helping drive its growth from startup to public company.

She has shared her expertise at global conferences including IEEE CAI 2025, IEEE CISOSE, SheTO, Google Next, Hadoop Summit, and DataWorks Summit. Vrushali holds a Master’s in Computer Systems Engineering from Northeastern University, is an inventor on patents in AI, identity, and distributed systems, and has published in IEEE journals as well as industry blogs.
