Nonprofits and charities looking for guidance on policies for responsible use of Generative AI, like ChatGPT, can take inspiration from other public service bodies.
The Canadian government recently issued a guide on the use of Generative AI for public service employees, which includes the following framework:
To maintain public trust and ensure the responsible use of generative AI tools, federal institutions should align with the “FASTER” principles:
FASTER = Fair | Accountable | Secure | Transparent | Educated | Relevant
Fair:
Ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations.
Accountable:
Take responsibility for the content generated by these tools. This includes making sure it is factual, legal, ethical, and compliant with the terms of use.
Secure:
Ensure that the infrastructure and tools are appropriate for the security classification of the information and that privacy and personal information are protected.
Transparent:
Identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; document decisions and be able to provide explanations if tools are used to support decision-making.
Educated:
Learn about the strengths, limitations, and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs.
Relevant:
Make sure the use of generative AI tools supports user and organizational needs and contributes to improved outcomes for Canadians; identify appropriate tools for the task; AI tools aren't the best choice in every situation.
Read more: