Should you be using ChatGPT in your nonprofit?

Balancing ethical concerns vs. productivity gains in this early phase of AI

ChatGPT may be the most ground-breaking new technology to appear since the rise of the internet in the 90s. But at this point it’s more promise than reality for most of the nonprofit sector.

Two common scenarios are playing out for nonprofit staff who are curious to explore how ChatGPT can help them deliver their programs:

  • using it unofficially and quietly because of concerns about risk, bias, and lack of official endorsement
  • not using it at all, for the same reasons

Neither of these is a good way forward, and both can mean lost opportunities. But not every organisation is prepared for the challenges of early AI innovation.

Here are a few questions to ask:

  • Does your organisation have a supportive ‘learn-while-doing’ culture that is accepting of occasional failure or disappointment?
  • Do you have internal staff with an innovation mindset who can lead on AI experimentation?
  • Do you have management support to devote time and energy to innovation projects that may not have a clear ROI?

And here are some ideas for moving forward with AI/ChatGPT in this tricky, evolving landscape:

  • Start experimenting now.
    There are likely people in your org who are already using or exploring ChatGPT, so one approach is to form a working group, formal or informal and preferably cross-team, and start sharing what works. Getting a few voices together to compare views and experiences can help clarify the risk vs. opportunity dynamic.
  • Be cautious, but don’t overthink it. 
    The landscape of best practices re: AI usage policies and ethical concerns is evolving, so avoid getting locked into any single position. Stay flexible, and lean on your existing procedures and policies for guidance.
  • Keep humans in charge.
    This should go without saying, but always ensure there’s human oversight of AI outputs, whether it’s marketing content, data insights or document analyses. AI programs are notoriously unreliable right now, though they will get better.
  • Stay focused on your mission.
    There’s a lot going on right now in the AI space, and it’s easy to get distracted by the latest announcements. Try to stay focused on what can actually help you deliver your programs now.
  • Start getting a handle on your data, in all formats.
    ChatGPT has value now as an independent tool, but in the bigger picture success with AI will depend on access to your organization’s data — including your supporter/donor database, your website, document libraries, internal email and team chats, plus your social media history and email campaigns. Start collecting details on as many of these sources as possible.
  • Prepare for disappointment first, then success.
    Innovation rarely follows a linear path. Expect to hit the ‘trough of disillusionment’ along the way, and avoid putting too many eggs in one basket. Try to manage expectations, and risks.
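The data-inventory advice above can start very simply: a structured list of sources and whether each can be exported for future use. A minimal sketch in Python, with hypothetical source names and fields that you would adapt to your own organisation:

```python
# Hypothetical inventory of organisational data sources.
# The names, owners, and fields here are illustrative, not prescriptive.
data_sources = [
    {"name": "Donor CRM", "format": "database", "owner": "Fundraising", "exportable": True},
    {"name": "Website", "format": "HTML/CMS", "owner": "Comms", "exportable": True},
    {"name": "Document library", "format": "PDF/DOCX", "owner": "Programs", "exportable": True},
    {"name": "Team chat history", "format": "chat logs", "owner": "Ops", "exportable": False},
    {"name": "Email campaigns", "format": "HTML email", "owner": "Comms", "exportable": True},
]

# Summarise which sources could feed a future AI or data project.
ready = [s["name"] for s in data_sources if s["exportable"]]
print(f"{len(ready)} of {len(data_sources)} sources are exportable: {', '.join(ready)}")
```

Even a spreadsheet with the same columns does the job; the point is to know what you have, who owns it, and what can actually be exported before any AI project needs it.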

Crafting an AI Mandate for your nonprofit

One helpful tool for navigating this evolving space is an AI Mandate statement: a short document that acknowledges the risks, safety concerns, and protocols appropriate for AI experimentation, while giving staff direction and endorsement to explore the potential benefits.

Here’s a sample AI Mandate for experimenting with ChatGPT and AI (generated by ChatGPT).

AI Mandate Statement for Nonprofit’s ChatGPT and Generative AI Experimentation

At [Nonprofit Name], we stand at the forefront of innovation, recognizing the transformative potential of Artificial Intelligence (AI) and generative technologies. Our journey into the realm of ChatGPT and other generative AI applications is underpinned by our commitment to explore and experiment responsibly, within the confines of ethical considerations and established safety practices.

We are fully aware that meaningful innovation often requires a willingness to accept calculated risks. As we venture into the realm of AI experimentation, we are guided by the following principles:

  1. Ethical Compass: Our commitment to ethical AI guides every step of our experimentation journey. We acknowledge the responsibilities that come with harnessing AI’s capabilities, ensuring that our initiatives are aligned with our values and the well-being of our stakeholders.
  2. User-Centric Approach: Our focus remains unwavering on our users’ needs and experiences. Through AI experimentation, we aspire to enhance user interactions, all while respecting individual privacy and fostering inclusivity.
  3. Transparency and Learning: We understand that innovation often emerges from a combination of successes and failures. In our experimentation, we emphasize transparency and learning, openly sharing our discoveries and insights, whether they stem from successful outcomes or valuable lessons learned from setbacks.
  4. Balancing Risk and Reward: We acknowledge that innovation inherently involves risk. Our pursuit of cutting-edge AI applications is guided by the awareness that while calculated risks are essential, the well-being of our users and communities remains paramount.
  5. Adapting to Complexity: The AI landscape is intricate and ever-evolving. We embrace this complexity with an open mindset, recognizing that adaptability and continuous learning are essential components of responsible AI experimentation.
  6. Safety and Accountability: Our commitment to safety is unwavering. We employ rigorous safety measures to safeguard against unintended consequences and biases, holding ourselves accountable for the impacts of our AI initiatives.
  7. Stakeholder Inclusion: We believe that innovation thrives in a collaborative environment. We engage with stakeholders, partners, and experts, valuing diverse perspectives that enrich our AI endeavors and ensure their alignment with our nonprofit’s mission.
  8. Inspiring Responsible AI Adoption: As pioneers in the nonprofit sector, we aspire to inspire responsible AI adoption. By leading by example, we set a high standard for ethical AI experimentation, fostering positive change within our organization and the broader community.
  9. Acknowledging the Learning Curve: Innovation requires bravery to learn from both failures and successes. We understand that setbacks provide valuable insights, shaping the trajectory of our experimentation in ways that propel us forward.

In embracing the potential of ChatGPT and generative AI technologies, we recognize that every experiment contributes to our collective understanding. We are committed to fostering a culture of exploration while remaining vigilant in upholding ethical standards, mitigating risks, and promoting responsible AI practices.

By acknowledging that the path to innovation is illuminated by the interplay of risk and learning, [Nonprofit Name] takes its place at the forefront of responsible AI experimentation, contributing to a future that is both innovative and ethically sound.

[Your Name]
[Your Title]
[Nonprofit Name]
[Date]