Is AI Hesitation slowing innovation in your nonprofit?

Many nonprofits are reluctant to get on board with AI at this important early stage.

ChatGPT might be the most ground-breaking new technology to appear since the rise of the Internet in the 90s. But nearly a year into the AI revolution, the benefits for nonprofits are still more promise than reality.

It’s not really surprising — the nonprofit sector is usually slower to adopt new technologies than the commercial sector. But the well-publicized concerns about ethics and risks are an additional hurdle for nonprofit professionals eager to explore the potential of AI technologies.

Two common scenarios seem to be playing out:

  • Staff may be using it quietly and unofficially because of concerns about risk, bias, and lack of endorsement (i.e., “I’ll use it on the side, quietly.”)
  • Staff may not be using it at all because of those same concerns (i.e., “I don’t think we’re allowed to use it.”)

Neither of these is really a good way forward. AI Hesitation could lead to missed opportunities for nonprofits and could also hamper the role of the nonprofit sector in helping guide the development of Responsible AI. Are we going to leave it up to the Googles, Microsofts, and Facebooks to decide what kind of AI we have in the future?

Is your organization ready to start using ChatGPT & AI?

AI isn’t a must-have technology yet, and it’s likely best for most organizations to avoid making big investments or commitments for now. But this could be an opportune time to begin exploring the AI landscape on a smaller scale. How can an organization strike a practical balance between risk and opportunity?

One place to start is to get some clarity about where your organization sits on the innovation vs. risk curve.

Here are a few questions to ask:

  • Does your organization have a learn-while-doing culture that is accepting of occasional failure or disappointment?
  • Do you have internal staff with an innovation mindset who can lead on AI experimentation?
  • Do you have management backing to direct some organizational time and energy into innovation projects that may not have a clear ROI?

Take a cautious approach

Agree on some general parameters to keep exploratory AI work within accepted boundaries — recognizing the reality of innovation risk-taking while keeping efforts aligned with the organization’s mission.

Here are some possible starting points:

  • Build around your champions.
    There are likely people in your organization who are already using/exploring ChatGPT, and could form the core of a working group — formal or informal, preferably cross-team — to start sharing what works. Getting a few voices together to talk can help clarify the risk vs. opportunity balance.
  • Be cautious, but don’t overthink it.
    The landscape of best-practice AI use policies and ethical red-lines is evolving rapidly, so avoid getting locked in a fixed position. Stay flexible, and lean on your existing policies and guard rails for guidance about what is acceptable/permissible.
  • Humans stay in charge.
    This should go without saying, but always ensure there’s human oversight of AI outputs, whether it’s marketing content, data insights, or document analyses. AI chatbots and text generators are unreliable right now, though they will get better.
  • Stay focused on your mission.
    There’s a lot going on right now in the AI space, and it’s easy to get distracted by the latest shiny announcements (“Now with 3D video!”). Try to stay focused on what can actually help you deliver your programs now.
  • Prepare for disappointment first, then success.
    Innovation rarely follows a linear path forward. Expect to hit the ‘trough of disillusionment’ along the way, and avoid putting too many eggs in one basket. Try to manage expectations — and risks.

These ground rules can help create a predictable, supportive environment for staff to begin to push the envelope.

Do you need an organization-wide AI mandate?

Overcoming AI Hesitation may still require a more formal directive that doesn’t just remove roadblocks, but also gives tangible mission and purpose to AI exploration. Internal champions still have to justify devoting some of their time and resources to AI projects that may not be in official plans or strategies.

It may be helpful to consider adopting an organizational ‘AI Mandate’ that formally recognizes the opportunity presented by new AI technologies and intentionally directs staff to begin explorations, within reasonable boundaries.

An AI Mandate could be a simple single-paragraph statement or a more complex document. Its goal is to empower staff to move ahead with learning and experimentation, understanding that the organization is comfortable with the uncertainty and risks.

— — —

Here is a sample AI Mandate statement, generated by ChatGPT, that could serve as a starting point for your organization’s needs.

AI Mandate Template

AI for Good: Nonprofit Organization Mandate

Mission Statement: Our nonprofit organization is dedicated to harnessing the power of Artificial Intelligence (AI) for positive social and environmental impact while prioritizing careful risk assessment and mitigation. We will seek to use AI technologies to further our mission, promote ethical AI practices, and ensure that the benefits of AI are accessible to all while actively preventing potential negative consequences.

Core Values:

  1. Innovation: We will continuously explore and promote innovative AI solutions, while rigorously evaluating potential risks and challenges and actively seeking to mitigate them.
  2. Ethical and Responsible AI: We will adhere to the highest standards of ethical AI development and deployment, emphasizing the proactive identification and mitigation of biases, discrimination, and potential harm.
  3. Collaboration: We believe in the power of partnerships and collaboration, not only to amplify our impact but also to collectively assess and manage the risks associated with AI projects.
  4. Transparency: We will maintain transparency in all our activities, including risk assessment and management, from project selection and funding allocation to AI development and data usage. We will be open and accountable to our stakeholders.

With this mandate, our nonprofit organization is committed to investigating the potential of AI and striving to further our mission through the responsible and impactful use of artificial intelligence, while taking a proactive approach to risk assessment and mitigation to prevent potential negative consequences.
