
AI Governance for Charities: Practical Lessons from the International AI Safety Report 2026

  • Writer: Helen Vaterlaws
  • 8 hours ago
  • 7 min read
Cover of the International AI Safety Report 2026 (February 2026).

The International AI Safety Report 2026 is written for policymakers wrestling with an uncomfortable dilemma: AI capabilities are accelerating, while evidence about their risks remains incomplete.


It may not be written with charity leaders in mind, yet the dilemma it describes, how to govern a fast-moving technology amid uncertainty, closely mirrors the tensions many in our sector already face.


Charities, too, are navigating the balance between innovation and caution, opportunity and harm, mission impact and public trust.


The report’s core message is balanced: general-purpose AI is already useful in many domains, but evidence about risks is uneven and governance needs to operate under uncertainty. Below is what I think charity decision-makers should take from the report, with practical steps you can act on now.


At a Glance: Charity-Relevant Takeaways From the International AI Safety Report


Definition: General-purpose AI refers to tools that can perform a wide variety of tasks across contexts (such as ChatGPT, Claude or Gemini); the term appears in multiple policy frameworks, including the EU AI Act.


Opportunities

(if you put the right guardrails in place)


  • Capacity freed up for greater impact (think summarising and admin)

  • Better internal knowledge access (finding answers in policies, procedures, past documents)




Risks

(that can become governance issues quickly)


  • Reliability gaps (confident wrong answers; missed nuance)

  • Trust & safeguarding risks (manipulation, over-reliance, inappropriate advice in vulnerable contexts)

  • Fraud at scale (deepfake voice/video, impersonation of CEOs, partners, beneficiaries)



Actions to take


  • Phase 1 Governance Foundations: Publish your policy, tools register, DPIA triggers and fraud protocol (a minimal tools-register sketch follows this list).

  • Phase 2 Testing & Embedding: Pilot one tool with your team; refine controls based on real use.

  • Phase 3 Maturity: Integrate into staff training; audit for drift or new risks.
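
To make Phase 1 a little more concrete, here is a minimal sketch of what a tools register might look like if kept as a simple structured record. It is written in Python purely for illustration; the field names (owner, dpia_required and so on) are my own assumptions, not anything prescribed by the report.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    name: str                   # e.g. a general-purpose chatbot or CRM assistant
    owner: str                  # named staff member accountable for the tool
    purpose: str                # what the tool is approved to be used for
    data_categories: list[str]  # kinds of data it may touch
    dpia_required: bool         # does use trigger a Data Protection Impact Assessment?
    approved: bool = False      # has sign-off been given?
    next_review: date = field(default_factory=date.today)

# Illustrative entry only.
register = [
    AIToolRecord(
        name="General-purpose chatbot",
        owner="Head of Operations",
        purpose="Drafting internal summaries",
        data_categories=["internal policies"],
        dpia_required=False,
        approved=True,
    ),
]

# Flag anything that needs a DPIA decision before staff start using it.
for tool in register:
    if tool.dpia_required and not tool.approved:
        print(f"Review needed before use: {tool.name} (owner: {tool.owner})")
```

Even a spreadsheet version of the same fields works; the point is that every tool has a named owner, an approved purpose and a review date.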



AI’s Strengths Are Real, But Uneven


General-purpose AI systems can do some remarkably advanced things, like writing computer code, creating lifelike images, and answering complex questions in maths and science. However, they can still struggle with tasks that seem simpler to people, such as counting objects in a picture or spotting and correcting basic mistakes in longer pieces of work. The report also highlights potential under-performance in languages or cultures underrepresented in their training data. This can mean whole groups of people are overlooked or poorly served by today’s AI.


Charity implications: AI might draft a donor letter in minutes but could insert errors that cost you trust if they slip through. The time you save upfront can vanish fast when you’re double-checking or apologising later.

Note: Evidence strength varies. Risks like deepfakes and cybersecurity threats have robust empirical support. Broader labour market impacts rely more on modelling and theoretical analysis.


Testing Doesn’t Guarantee Real-World Results


The report highlights an emerging evaluation gap: AI development companies (model providers) test their models before releasing them to the public, but these evaluations often use data the system has already seen, or take place in controlled lab settings. This means a model may look more capable in tests than it proves in your specific workflows, especially when tasks are messy, long-running, or involve real users and constraints.


Indeed, the report notes emerging research suggesting some models can behave differently in evaluation settings than in deployment-like conditions (p.79; Schoenn et al., 2025). However, the report identifies this as a testing methodology problem, not evidence that systems fail once deployed. It simply reinforces why you should test any tool in your own environment before relying on it.


Screenshot from the International AI Safety Report 2026 (p. 79) illustrating findings from Schoenn et al. (2025) on AI model sandbagging during evaluation.

Charity implication: When you’re procuring AI tools, pilot any system with your actual data and workflows before scaling.

Where AI Can Help Charities (With Guardrails)


The International AI Safety Report is not anti-AI. It highlights measurable productivity gains in areas such as customer service and writing. For charities, which are often stretched thin, these tools may help free up capacity for more impactful work.


Drafting and summarising routine content


What the report says: Current AI models often do well on well-scoped tasks; performance becomes less reliable as tasks become longer/more complex.


Things to consider: Assign trained staff to review outputs and require human sign-off for all public or sensitive material. Thorough checks may reduce time savings initially, but they build trust. Over time they may also let you calibrate oversight based on observed performance, but only where the use case is low-risk and you have monitoring and incident processes in place.


Streamlining admin through workflow integration


What the report says: Productivity gains often come less from the AI tool itself and more from how it is integrated into existing systems.


Things to consider: This requires mapping how AI connects with your software (such as Salesforce or bespoke CRMs) and frontline processes, and building quality checks at each stage; a minimal sketch of one such quality gate follows.
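
As a rough illustration of "quality checks at each stage", the sketch below shows a simple gate that sits between an AI draft and your CRM or outbox. It is Python for illustration only; the check rules and routing labels are hypothetical, and nothing here reflects a real Salesforce API.

```python
REQUIRED_PHRASES = ["Dear", "Thank you"]  # phrases your donor-letter template must contain
MAX_LENGTH = 1500                         # guard against runaway or truncated drafts

def passes_quality_checks(draft: str) -> bool:
    """Basic automated checks; these never replace human sign-off."""
    if not draft or len(draft) > MAX_LENGTH:
        return False
    return all(phrase in draft for phrase in REQUIRED_PHRASES)

def route_ai_draft(draft: str) -> str:
    # Every draft still goes to a person; the gate only decides which queue it joins.
    if passes_quality_checks(draft):
        return "queued_for_human_sign_off"
    return "returned_to_drafter_with_flags"

print(route_ai_draft("Dear supporter, Thank you for your gift last month."))
```

The design point is that automated checks filter obvious problems early, while a human still signs off before anything leaves the organisation.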


Building defences against AI-driven harms


What the report says: The report highlights growing risks from scams and deepfakes. The authors note that people misidentify AI-generated text 77% of the time and audio 80% of the time.


Things to consider: Charities should adopt process controls (call-backs, dual approval) and establish incident-response plans. For example, staff can be trained to verify suspicious emails or calls through a secondary channel before acting.


Why AI Risks Hit Charities Harder: What You Need to Know


The International AI Safety Report discusses a range of risk areas. Three that are especially salient for charities are:


Breach of trust: An erroneous AI-drafted safeguarding letter or donor email can unravel years of credibility.


Beneficiaries may be vulnerable: Charities serve populations at higher risk, such as children, elderly people, those in crisis and people with disabilities.


Resource-constrained oversight: Small charities may lack dedicated tech or compliance roles. Governance must be lightweight but robust.


How the International AI Safety Report 2026's key risks translate to your charity AI governance policy:


Risk 1: Manipulation as a safeguarding issue


What the report says: Experimental studies suggest AI-generated content may produce measurable shifts in beliefs when tested under controlled conditions.


Charity interpretation: AI support tools deployed to vulnerable beneficiaries may inadvertently create unhealthy reliance or influence harmful decisions without appropriate safeguards.


Things to consider:


  • Explicit labelling: Tools could be clearly identified as AI-powered, but don’t rely on labelling alone (a minimal sketch of these guardrails follows this list)

  • Human escalation: Keep a route to speak with a human.

  • Clear boundaries: Be explicit about what the AI can and can't help with.
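
Here is a minimal sketch of how those three guardrails might sit around a chatbot, assuming a generic Python wrapper. The keyword lists, the AI_LABEL wording and the generate_reply stand-in are hypothetical and would need to reflect your own safeguarding policies.

```python
ESCALATION_KEYWORDS = {"suicide", "self-harm", "abuse", "crisis"}
OUT_OF_SCOPE_TOPICS = {"medical", "legal", "diagnosis"}

AI_LABEL = "You are chatting with an AI assistant, not a person."
HUMAN_ROUTE = "To speak with a member of our team, reply HUMAN or call [your helpline number]."

def respond(user_message: str, generate_reply) -> str:
    text = user_message.lower()
    # Human escalation: safeguarding-related messages bypass the AI entirely.
    if any(word in text for word in ESCALATION_KEYWORDS):
        return f"{AI_LABEL} This sounds important. {HUMAN_ROUTE}"
    # Clear boundaries: decline topics the tool is not approved to handle.
    if any(word in text for word in OUT_OF_SCOPE_TOPICS):
        return f"{AI_LABEL} I can't advise on that topic. {HUMAN_ROUTE}"
    # Explicit labelling on every ordinary reply.
    return f"{AI_LABEL}\n{generate_reply(user_message)}\n{HUMAN_ROUTE}"

# Usage with a stand-in model function:
print(respond("What time does the drop-in centre open?", lambda m: "It opens at 9am."))
```

Keyword matching is crude and will miss cases; in practice it supplements, and never replaces, trained staff and your safeguarding procedures.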


Risk 2: Reliability gaps and hidden failures


What the report says: Increasing autonomy may raise the risk of unnoticed errors, including data breaches or misrouted sensitive information.


Charity interpretation: An AI system that hallucinates eligibility data or misclassifies a safeguarding disclosure could cause harm before you're aware of the problem.


Things to consider:


  • Audit trails: Logging what the AI outputs and how staff use it can help identify failures (see the sketch after this list).

  • Spot-checks: Sampling AI outputs periodically (e.g., 5–10% weekly) may catch errors the system misses.

  • Scenario planning: Before deploying, consider what could go wrong and how you'd respond.
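
As referenced in the list above, here is a minimal sketch of an audit trail plus weekly spot-check, assuming outputs are logged to a local JSONL file. The file path, field names and 10% sample rate are illustrative assumptions, not requirements from the report.

```python
import json
import random
from datetime import datetime, timezone

LOG_PATH = "ai_output_log.jsonl"

def log_ai_output(tool: str, prompt: str, output: str, staff_member: str) -> None:
    """Append one record per AI interaction so failures can be traced later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "staff_member": staff_member,
        "human_override": None,  # filled in later if staff correct the output
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def weekly_spot_check(sample_rate: float = 0.10) -> list[dict]:
    """Return a random sample (roughly 10%) of logged outputs for human review."""
    with open(LOG_PATH, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    sample_size = max(1, int(len(records) * sample_rate)) if records else 0
    return random.sample(records, sample_size)

# Usage:
log_ai_output("General-purpose chatbot", "Summarise the volunteering policy",
              "The policy covers...", "J. Smith")
print(weekly_spot_check())
```

The human_override field also supports the "record reasoning" suggestion later in this post: noting why staff corrected an output preserves decision-making knowledge over time.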


Risk 3: Labour shifts and future capacity


What the report says: The authors note early signs of declining demand for early-career workers in some AI-exposed occupations. However, aggregate employment effects are still unclear.


Charity interpretation: If we automate entry-level “apprenticeship tasks” too aggressively, we may weaken our future safeguarding and service quality pipeline.


Why it matters: Automating entry-level tasks may remove the learning ground where junior staff develop judgment and safeguarding awareness.


Things to consider:


  • Redesign, don't eliminate: Junior staff could validate AI outputs rather than being replaced by them, building pattern recognition skills.

  • Rotate oversight roles: Spreading AI review across teams, rather than centralising it, may build institutional understanding of how systems fail.

  • Record reasoning: Documenting why staff override AI recommendations could preserve decision-making knowledge over time.


AI Implementation Considerations for Charity Leaders: How Staff Mindset Shapes AI Outcomes


The International AI Safety Report deliberately stops short of implementation recommendations, leaving room for policymakers, businesses and organisations to translate the evidence into context-specific practice. Your charity faces the same dilemma as policymakers: you don't have perfect information about how an AI tool will behave in your specific workflows, with your data, and with your vulnerable users.


This is where staff buy-in becomes critical. If your team doesn't understand why you're adopting AI, or worries about job displacement, implementation often stalls or becomes performative. Staff may skip safety checks, override protocols without thinking, or provide poor feedback during pilots, not from malice but because they don't see the point or fear what comes next. Unaddressed scepticism can silently undermine the governance structures you've built.


Things to consider:


  • Be transparent about purpose and impact before pilots begin, explicitly addressing how roles evolve and what job concerns exist

  • Create low-friction feedback loops so frontline users can flag when AI outputs seem wrong, making that feedback valued not punished

  • Spread AI oversight across teams rather than centralising it, building shared accountability and reducing individual bias

  • Acknowledge uncertainty openly, framing pilots as learning opportunities rather than proof points


Next Steps: Stewarding Your Charity's AI Governance Approach


AI offers real potential to ease admin burdens and protect trust, but only if we treat it as a service shift, not a quick fix.


Robust governance of AI in charities is essential to realising benefits while maintaining safety and ethics. Let’s use AI to amplify our care, not dilute it.



Compliance note: The Information Commissioner's Office guidance is a good starting point for data protection. If you’re outside the UK, check your local regulator and your organisation’s policies.


Change doesn’t start with a workshop; it starts with one honest conversation.






FAQ: AI Governance Lessons for Charities 2026

Q: The International AI Safety Report 2026 discusses potential manipulation. What does that mean for charities using AI with vulnerable service users?


A: If your charity deploys an AI chatbot to vulnerable beneficiaries, it could inadvertently create an unhealthy reliance. Explicit labelling ("This is AI") and human escalation routes are safeguards charities can adopt alongside their existing safeguarding measures.


Q: The International AI Safety Report 2026 flags potential future labour shifts, with some junior roles at risk. Why should charities care?


A: Safeguarding expertise and judgment are often learned through entry-level work. If you automate these roles entirely, you lose the training ground for future staff who'll oversee AI systems.


Q: The International AI Safety Report 2026 emphasizes uncertainty. Shouldn't charities wait until things are clearer?


A: The report’s framing implies that waiting for certainty is not a strategy. Organisations can put governance in place now; waiting may mean rushing adoption later, which carries safeguarding risks. Better to move slowly and intentionally, and to document decisions as you go.


Q: Who is accountable if the AI tool causes harm?


A: Accountability remains with the charity; treat AI as a supplier/service with internal owners, and document decisions, controls, and incident response.


Note: This guidance reflects general best practice. However, every charity's legal obligations, funder requirements, and safeguarding needs differ. Before implementing any changes always review your specific funder contracts, data protection policies (GDPR) and safeguarding policies. Examples are for illustrative purposes only; no official affiliation with the organisations or tools mentioned is claimed.

© 2026 Insights2Outputs Ltd. | All rights reserved | Privacy Policy

Disclaimer: This content is provided for informational and illustrative purposes only. It does not constitute professional advice and reading it does not create a client relationship. Always obtain professional advice before making significant business decisions.
