Generative AI promises to be the long-awaited “extra staff member” every resource-strapped nonprofit needs.
But for a sector that’s the second-most attacked by cybercriminals globally, rushing into AI adoption without first establishing security guardrails risks exposing sensitive data, eroding beneficiary trust, and undermining these organizations’ vital missions. With stakes this high, it’s no surprise that nonprofit sentiment about AI stretches from bright optimism and curiosity to deep caution and skepticism.
To help nonprofits find a secure path forward, the Okta for Good team conducted a focused assessment of its grantees’ tech and security needs and interviewed more than 20 nonprofit leaders, tech funders, and AI experts across our ecosystem. The goal of this research is simple: to provide a roadmap for organizations adopting AI, all in service of building a more secure world.
Here are key insights from the field:
AI’s core value is creating ‘extra staff capacity’
Another set of hands: Nonprofits are using AI to streamline mission-critical tasks such as drafting grant proposals, reconciling expense reports, and managing other repetitive administrative duties. This boost in capacity lets teams focus on what really matters: the people and communities they serve.
AI puts data to work: Organizations say predictive AI is helping them use data to better meet their beneficiaries’ needs. For example, in 2023, the Google.org-backed John Jay College x DataKind initiative used predictive AI to identify at-risk students for early intervention, boosting graduation rates from 54% to 86% in two years. This improved understanding of beneficiary trends allows nonprofits to better tailor their programs.

Security risks are advancing faster than policies can keep up
The danger of hasty AI deployment: Fueled by both internal and external pressure, many nonprofits feel rushed to implement AI despite lacking adequate security measures, digital systems, or rules for data protection. Hurried adoption creates significant vulnerabilities. Failing to secure these AI systems can result in damaging data breaches, system compromises, and other cyber incidents that carry serious repercussions for the organizations and the communities they support.
The need for guidance: The fundamental challenge is that security risks are advancing as quickly as AI adoption itself. Clear guidance on the use of generative AI is critical to mitigating the risk of information and data misuse by nonprofit staff.
Recognizing the diverse and evolving nature of AI adoption, our research within the Okta for Good ecosystem found organizations generally fall into four categories, each with specific guidance needs:
AI unaware: No policy or employee usage; staff currently lack awareness of AI.
Ad hoc usage: No organization-wide AI policy; employees use AI individually for efficiency.
Established/intentional usage: Organization-wide AI policies exist; usage is intentional and often guided by external standards, such as TAG’s AI Framework.
Deploying externally: Organizations deploy AI in technology products, using in-house and/or commercial large language models (LLMs).

The rush to adopt is driven by funder pressure and FOMO
What are the key considerations compelling nonprofits to embrace AI now? The motivation stems from two primary pressures:
Funder and donor expectations: Funders’ growing emphasis on demonstrable program outcomes is pushing nonprofits to adopt AI to boost efficiency and keep pace with their peers. It also signals that AI integration may soon become a standard expectation.
Fear of falling behind the curve: Wary of missing out on potential gains and competitive advantages in a fast-moving tech landscape, many nonprofits feel pressured to adopt AI without fully understanding its necessity or long-term implications.
Financial hurdles are threatening an equitable AI future
Uneven adoption will widen the digital divide: As larger nonprofits use AI to boost efficiency and impact, smaller organizations struggle with the cost, access, and expertise that adoption requires. This dynamic only widens existing inequalities in innovation, fundraising, and service delivery across the nonprofit sector.
The hurdle of resource allocation: Nonprofits face a difficult trade-off when considering AI: investing in long-term efficiency versus addressing immediate, critical programmatic needs for beneficiaries. Financial constraints compound the decision. Insufficient technology funding creates major barriers, particularly limiting smaller organizations’ ability to afford the substantial costs of AI tools, training, and skilled staff.
New AI duties require going beyond traditional data privacy
Privacy, well-being, and inclusivity: While data privacy is always a concern, AI introduces new complexities and heightened risks. AI’s ability to process vast amounts of sensitive personal information increases vulnerability and magnifies the potential impact of security breaches. Responsible AI adoption requires nonprofits to go beyond traditional measures to address AI-specific ethical considerations and regulatory requirements, including an obligation to ensure AI fosters beneficiary well-being, actively reduces harm, and remains inclusive. Meeting these ethical demands is an ongoing challenge for many nonprofits.
A security-first approach for the nonprofit journey ahead
AI is not a cure-all for the technology or social impact challenges that many nonprofits face.
As more organizations use AI tools to further their missions, it’s vital they ground their use of emerging technologies in a pragmatic, security-first approach. Insights gathered from our work in the field reinforce the urgency of this balanced perspective: thoughtful, secure AI integration that advances digital transformation without introducing preventable risks or getting caught up in the AI hype cycle.
Responsible AI use will continue to evolve rapidly. Ultimately, the nonprofit sector’s success in this new era hinges on its ability to make confident, secure choices. The path ahead requires continued vigilance and the ongoing sharing of best practices across the ecosystem.
Below are tools and resources from partner organizations that can provide the necessary guidance to navigate these complexities:
+++
Methodology
Okta for Good conducted interviews with more than 20 internal and external industry leaders, including subject matter experts in AI and LLMs, AI practitioners, researchers, nonprofit leaders, tech funders, and corporate social responsibility peers. We also spoke with Okta’s internal technology teams to gain firsthand insights into AI’s impact across our nonprofit ecosystem.
About Okta for Good
Okta’s vision is to free anyone to safely use any technology. To drive that vision forward, we launched the Okta Secure Identity Commitment: our long-term initiative to lead the industry in the fight against Identity attacks. Through Okta for Good, we’re taking action to address critical societal issues that align with our business, strengthening the cybersecurity posture of nonprofits, expanding the field of cyber talent, and ultimately contributing to a more secure world. Learn more here.