
Risk considerations for nonprofits looking to use artificial intelligence (AI)

If your nonprofit is trying to make the most of a small staff and limited budget, you may be tempted to turn to artificial intelligence (AI) solutions to organize data more efficiently, generate marketing materials and target specific donors during fundraising campaigns.

But while AI can be a powerful tool, it can also lead to a fair amount of risk for your organization. Here, Church Mutual risk control specialists detail four of the biggest risks you may encounter:

  1. Neglecting to thoroughly vet an AI vendor. Over the past year, many pop-up technology companies that claim to offer AI solutions have emerged. But before you sign a contract with one of these companies, investigate the company’s credibility, longevity and history. It may also be helpful for one of your staff members or volunteers to take a course in using AI for nonprofits.

  2. Implementing an AI system that shows bias. As AI has become more prevalent in our society, users have discovered that some systems can incorporate bias into their results.

There are three different types of bias you may encounter:

    • Data bias – when the underlying data is limited or skewed, painting an inaccurate picture of a certain population.

    • Algorithmic bias – when a system produces unfair outcomes because of flaws in how its algorithms were designed or trained.

    • Confirmation bias – when an AI system relies too much on pre-existing beliefs or trends in the data.

It’s important to always examine output from the system for any potential signs of bias.
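As a rough illustration of that kind of output check, the sketch below compares how often an AI tool selects people from different groups — for example, flagging "high-priority" donors — and warns when the gap between groups is large. The data, group labels and 20% threshold are all made up for illustration; a real review would use your own records and your own judgment about what counts as a meaningful gap.

```python
# Minimal sketch: compare selection rates across groups in AI output.
# All data below is invented for illustration.

def selection_rates(records):
    """Return the fraction of selected records for each group."""
    totals, selected = {}, {}
    for group, picked in records:
        totals[group] = totals.get(group, 0) + 1
        if picked:
            selected[group] = selected.get(group, 0) + 1
    return {g: selected.get(g, 0) / n for g, n in totals.items()}

# Hypothetical AI output: (group, was_selected)
output = [("A", True), ("A", True), ("A", False),
          ("B", False), ("B", False), ("B", True)]

rates = selection_rates(output)
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # the threshold is a judgment call, not a standard
    print(f"Possible bias worth a closer look: selection rates {rates}")
```

Even a simple check like this can surface patterns that deserve a closer human look before anyone acts on the system's output.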

  3. Falling victim to a hacker. Just like any other technological or web-based tool, AI requires careful attention to cybersecurity. Ensure that sensitive and personal data is anonymized, encrypted or otherwise protected when you are using AI tools. Additionally, limit access to the AI system to a small number of people in your organization. Establish a written process specifying that an employee or volunteer must obtain permission from the executive director before utilizing data from the system.

  4. Plagiarizing other people’s or organizations’ work. When you use an AI system to generate marketing material or a grant application, you run the risk of the system reproducing others’ words. Instead of having the system write the complete piece, use its research as a starting point for a human writer, who can then add their own insight and research.
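The advice above about anonymizing sensitive data before it reaches an AI tool can be sketched in code. This is a minimal illustration, not a prescribed method: it replaces donor names with salted hashes so an external tool never sees real identities, while the salt stays inside your organization. The field names and salt value here are entirely hypothetical.

```python
# Minimal sketch: pseudonymize donor records before sending them to
# an external AI tool. Field names and the salt are illustrative.
import hashlib

SALT = b"keep-this-secret-inside-your-organization"  # hypothetical value

def pseudonymize(name: str) -> str:
    """Return a stable, non-reversible token for a donor name."""
    return hashlib.sha256(SALT + name.encode("utf-8")).hexdigest()[:12]

donors = [{"name": "Jane Smith", "gift": 250},
          {"name": "Omar Diaz", "gift": 100}]

# The records that leave your organization carry only opaque tokens.
safe_records = [{"donor_id": pseudonymize(d["name"]), "gift": d["gift"]}
                for d in donors]
```

Hashing is only one option — the key design choice is that whatever mapping links tokens back to real people never leaves your organization.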

If you are considering incorporating AI into your nonprofit’s various functions, the most important step you can take right now is to create a written policy. Don’t wait until issues come up — anticipate them now by talking with other organizations that have experience using AI.

When used cautiously, these new tools can be a game-changer for a nonprofit that wants to maximize its resources. But move deliberately: a hasty adoption could waste a lot of time, money and resources on an AI system that creates more problems than it solves.

For more information, resources and tips on pressing issues facing nonprofit and human services organizations, visit