Artificial Intelligence (AI) is rapidly entering the charity and not-for-profit sector, offering powerful tools to help organisations increase their impact with limited resources. While AI presents significant opportunities, it also introduces new risks.
AI can reduce days of work to just moments. It can assist with drafting funding applications, analysing data, or responding to supporter queries. In a sector where time and resources are often stretched, tools that enable more to be done with less are understandably appealing.
However, this creates a tension. Charities are trusted with sensitive data, vulnerable communities, and public or donor funding. As such, they must manage risk carefully. Some organisations have already begun experimenting with AI, sometimes without fully understanding the implications.
This blog provides guidance on how charities can explore AI safely and effectively. It outlines the current landscape, common use cases, potential risks, and practical strategies for responsible adoption. It also shares key takeaways from Inde’s sold-out community event in Christchurch, where local charities and not-for-profits gathered to discuss how AI can be harnessed ethically and effectively to amplify their missions.
AI is a broad term that can mean different things depending on the context. In most current applications, AI refers to tools powered by large language models (LLMs), which are trained on vast amounts of text to generate human-like responses.
These tools are already being used to automate administrative tasks, support communications, and improve productivity. In the future, more advanced AI may support decision-making, predict service demand, or help design new programmes.
Understanding the capabilities and limitations of AI is the first step towards using it responsibly. It is important to distinguish between what AI can do today and what remains speculative or experimental.
While AI tools can be incredibly useful, they are not without serious limitations. Understanding these limitations is essential for using them safely and effectively in your organisation.
At their core, today’s AI tools do not reason the way people do. They generate responses by predicting the most statistically likely next words, based on patterns in their training data.
This is why AI tools are not designed to say “I don’t know” when they lack information. Instead, they will often provide an answer, confidently, even when it is incorrect (a behaviour commonly called “hallucination”).
Understanding these limitations helps organisations use AI more effectively and avoid over-reliance. It also reinforces the importance of human oversight, especially in sensitive or high-stakes contexts.
AI adoption introduces two broad categories of risk. Technical risks include data security, privacy breaches, and inaccurate or fabricated outputs. Ethical risks include bias, a lack of transparency, and the erosion of human oversight.
For charities, where trust and accountability are paramount, these risks must be carefully managed.
Before adopting AI, it is essential for charities to define their goals. AI should support the organisation’s mission and values, not distract from them. By identifying specific use cases, charities can ensure that AI investments are aligned with their strategic priorities.
Even without access to enterprise licences or sector-specific tools, charities can begin exploring AI in a responsible way: start with low-risk, low-stakes tasks; keep personal or sensitive data out of public AI tools; have a person review AI-generated content before it is used; and record what works and what doesn’t.
These steps can help charities build a culture of safe experimentation, where innovation is encouraged but not at the expense of trust or integrity.
AI is not just for large corporations or tech companies. It has real potential to help charities and not-for-profits work more efficiently, reach more people, and increase their impact.
However, with that potential comes responsibility. By understanding what AI is, setting clear goals, assessing risks, and implementing practical safeguards, charities can explore AI in a way that is ethical, effective, and aligned with their mission.
The future of AI in the charity sector is about thoughtful, informed adoption. With the right approach, AI can become a valuable tool for advancing social good. If you’d like advice on how to harness the benefits of AI in your organisation while managing the risks, please contact us; we’re here to help.