
Fund capacity, not tech: Building a business case for AI in charities (Part 3/4)

  • Writer: Helen Vaterlaws
  • Feb 5
  • 3 min read

Updated: Feb 12

Image: A diverse charity team demonstrating mission impact, supporting the strategic reframe of AI costs from 'IT overhead' to 'digital capacity' that increases frontline capacity and beneficiary reach.

For boards and funders, the acronym “AI” usually triggers two immediate reflexes: fear of high costs and fear of unmanageable risk.


That scepticism is healthy. It’s grounded in a sector history of over-promised and under-delivered projects. Indeed, Gartner predicts that over 40% of agentic AI projects will be cancelled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.


However, there is a greater risk to the charity sector right now: the gradual erosion of capacity caused by burnout, administrative backlog, and rising demand. If you walk into a board meeting asking for a software budget, you will likely face resistance. The core argument gets lost in technical debate. To secure buy-in, you need to stop asking for a tool and start making a case for operational capacity.


You aren’t buying AI; you are investing in service resilience.

The Triangulated Charity AI Business Case


In my years leading charity innovation programmes, I found that pitching "innovation" rarely wins over a sceptical Board. Instead, you need to triangulate your pitch to address the three distinct needs in the room:


Diagram: The Triangulated Business Case framework for non-profits, illustrating the three psychological needs of a Board: Financial Prudence for the Treasurer, Mission Impact for the Chair, and Governance Safety for the Risk Committee.

  1. Financial Prudence (The Treasurer)

  2. Mission Impact (The Chair/CEO)

  3. Governance Safety (The Risk Committee)


Here is how to frame the argument for each of them.



The Treasurer: The Efficiency Case


  • The Need: Financial prudence

  • The Argument: Cost avoidance


Example Pitch: "This pilot costs £2,000 but automates 520 hours of admin a year. Effectively, this tool purchases capacity at roughly £3.85 per hour, freeing up our skilled staff to return to front-line client work."
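If you want to show your working to the Treasurer, the arithmetic behind that pitch is simple to reproduce. A minimal sketch, using the illustrative £2,000 cost and 520 hours from the example above:

```python
# Illustrative cost-per-hour check using the example pitch figures.
pilot_cost_gbp = 2_000      # one-off pilot cost from the example
hours_saved_per_year = 520  # admin hours the tool is expected to automate

cost_per_hour = pilot_cost_gbp / hours_saved_per_year
print(f"Capacity purchased at £{cost_per_hour:.2f} per hour")  # → £3.85 per hour
```

Swap in your own pilot cost and a realistic (not best-case) estimate of hours saved; the per-hour figure is what makes the cost-avoidance argument land.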




The Chair: The Impact Case


  • The Need: Mission Alignment

  • The Argument: Equity and access


Example Pitch: "Currently, non-English speakers wait 3 days for triage. By using AI to support real-time translation (with staff verification), we can reduce that to 4 hours. This is an equity initiative that allows us to reach marginalised communities without increasing frontline burnout."




The Risk Committee: The Governance Case


  • The Need: Safety

  • The Argument: Risk containment


Example Pitch: "We are not asking for a full rollout. We are requesting an 8-week gated experiment with capped costs. If accuracy drops below our agreed metric, we stop. The financial risk is low, but the organisational learning is assured."
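The stop condition in that pitch can be written down as an explicit go/no-go rule before the pilot starts, so nobody argues about it at week six. A minimal sketch; the 90% accuracy floor and £2,000 cap are hypothetical thresholds for illustration, not figures from any real pilot:

```python
# Hypothetical gated-pilot rule: the experiment continues only while
# both the quality gate and the cost gate hold.
ACCURACY_FLOOR = 0.90   # illustrative "agreed metric" set with the board
BUDGET_CAP_GBP = 2_000  # illustrative capped spend for the 8-week experiment

def pilot_should_continue(weekly_accuracy: float, spend_to_date: float) -> bool:
    """Return True if the pilot passes both its quality and cost gates."""
    return weekly_accuracy >= ACCURACY_FLOOR and spend_to_date <= BUDGET_CAP_GBP

# Example review: accuracy has dipped below the agreed floor, so we stop.
print(pilot_should_continue(0.87, 1_200))  # → False
```

Agreeing the thresholds in code (or simply in the pilot charter) before launch is what makes the risk "contained": the decision to stop is mechanical, not political.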




Reality Checks (read before you pitch)


  • Define the "Do Nothing" cost: Remind the board that the status quo also carries a cost in staff burnout, churn, and unserved beneficiaries.


  • Budget for the learning curve: Efficiency won’t happen on day one. Be honest that the first month is an investment in training, not an immediate return.


  • Compliance is non-negotiable: Before you pitch, ensure the tool aligns with Information Commissioner's Office guidance (or your regional equivalent).



Evidence from the Field: Citizens Advice


If you need a concrete example to support your business case, look at the “Caddy” pilot run by Citizens Advice. Facing rising demand and growing staff burnout, the team tested an AI assistant to help supervisors draft case notes. The results demonstrate a clear, triangulated business case:


  • Efficiency: The tool reduced the average time needed to review a case from around ten minutes to roughly four.


  • Impact: By freeing up supervisor time, the pilot increased capacity to support more frontline advisers, enabling the service to help more vulnerable people.


  • Governance: Around 80% of AI-generated responses were of sufficient quality to be passed directly to advisers without revision. The remaining cases were flagged for human edits, allowing supervisors to manage risk while still capturing the majority of the efficiency gains.


Conclusion: Service Stewardship


Securing funding for AI isn’t about chasing hype; it is about demonstrating service stewardship. By treating AI as a hypothesis to be tested rather than a solution to be installed, you ensure that technology serves your mission, not the other way around.


Next up: Now that you have the funding, how do you run the pilot safely? In Part 4, I will share the "Map, Measure, Magnify" framework, a guide to low-risk innovation pilots.


Read the full AI adoption for charities series






I’m heading to UNESCO House in Paris, February 2026 for the International Association for Safe & Ethical AI's second annual conference. I’ll be sharing free notes from the event on LinkedIn for those interested.



Note: These insights are based on practitioner experience and do not constitute legal or regulatory advice. Always review your specific funder contracts and data protection policies (GDPR) before making significant changes to your data collection or retention schedules. Examples are for illustrative purposes only; no official affiliation with the organisations or tools mentioned is claimed.

© 2026 Insights2Outputs Ltd. | All rights reserved | Privacy Policy

Disclaimer: This content is provided for informational and illustrative purposes only. It does not constitute professional advice and reading it does not create a client relationship. Always obtain professional advice before making significant business decisions.
