Ethical AI Frameworks for Charities: Practical Lessons from UNESCO's Ethics of AI

  • Writer: Helen Vaterlaws
  • Apr 4
  • 6 min read

Turning ethical AI principles into practical charity governance frameworks


[Image: "Recommendation on the Ethics of Artificial Intelligence", UNESCO]

UNESCO’s Recommendation on the Ethics of Artificial Intelligence was written for governments and policymakers, not for charity leaders juggling safeguarding reviews, funder reporting and a growing inbox of AI tool pitches.


It was adopted in November 2021, before ChatGPT launched and before most charities had AI anywhere near their agenda. That timing is part of its value for charities designing ethical AI frameworks: the principles were developed before the hype cycle, which means they are grounded in ethics rather than reactivity.


If your charity is adopting, piloting or simply considering AI tools, UNESCO's recommendation offers a useful ethical reference point alongside your existing legal and regulatory obligations. The challenge is turning those principles into practice.


Below, I set out the main lessons charities can take from UNESCO’s recommendations, along with some practical steps you can act on now.



Why are charities well placed to lead on ethical AI?


Charities are not starting from scratch on ethical AI: they already operate under governance structures built around trust and accountability. The third sector has extensive experience running safeguarding reviews, navigating consent and carefully managing power dynamics.


  • Safeguarding boards already model human oversight of high-stakes decisions.


  • Equality impact assessments already test for fairness.


  • Beneficiary feedback mechanisms already provide transparency and accountability.


Ethical AI adoption is largely a matter of applying those existing charity values consistently to a new category of tool.



At a Glance: Charity-Relevant Takeaways From UNESCO’s Ethical AI Recommendations


Definition: UNESCO’s Recommendation on the Ethics of Artificial Intelligence is a non-binding international framework covering values, principles and 11 policy action areas. It applies across the AI lifecycle, from design and procurement through deployment, monitoring and retirement.


Opportunities

(if you align your AI approach with these principles)


  • A ready-made ethical reference point you can use in AI policies, board papers, funding bids and supplier conversations.

  • Shared language for discussing AI ethics with partners, commissioners and regulators.




Risks

(that can arise when charities adopt AI without clear guardrails)


  • Personal data processed without clear privacy, security or retention controls.

  • AI outputs influencing high-stakes decisions without meaningful human review.

  • Bias, inaccuracy or poor fit going unnoticed where teams are small or capacity is stretched.




Actions to take


  • Ethical foundations: Run and document a proportionate ethical impact assessment before piloting or scaling any AI use.

  • Oversight and transparency: Set named human review points with clear accountability, and be clear when AI is being used.

  • Fairness and review: Test outputs for bias against the communities you serve, then monitor performance over time and act on issues.


How should charities assess AI ethics risk?


What UNESCO says: Member states should put ethical impact assessment frameworks in place to identify and assess the benefits, concerns and risks of AI systems, especially where marginalised people or people in vulnerable situations may be affected.


Charity implication: Many charities are still experimenting with AI informally, sometimes through individual enthusiasm rather than organisation-wide governance. Without a proportionate assessment, it is easy to miss risks that only become visible once a tool is embedded in a workflow.


Things to consider:


  • Before piloting any AI tool, ask three questions: Who could be affected? What could go wrong? How would we know?


  • Map where AI touches beneficiary-facing or decision-influencing processes. Even back-office tools can have downstream effects if they shape the information staff act on.


  • Document the assessment, the risks identified and the steps you will take to reduce them. A written record supports accountability and helps you learn from each deployment.


What is ethical drift in AI adoption?


There is a subtler risk that is worth naming: ethical drift. The first AI tool your charity pilots will probably get careful scrutiny. The second will get a lighter touch. By the fifth, the assessment may be a conversation over coffee. Building a repeatable, proportionate assessment process now is what stops good intentions from quietly eroding over time.



What data and privacy safeguards does charity AI need?


What UNESCO says: Privacy must be respected, protected and promoted throughout the AI lifecycle. Data should be collected, used, shared and deleted in line with applicable law. Individuals should have meaningful rights over their personal data, supported by transparency and accountability, and where appropriate, meaningful consent.


Charity implication: Charities often hold highly sensitive information: safeguarding disclosures, health conditions, immigration status and financial hardship. If an AI tool processes personal data in the cloud, you need to know where the data is stored, who can access it, whether it is used to train models and what happens to it after processing.


Things to consider:


  • Review your data processing agreements and supplier terms with any AI vendor. Check storage location, retention, sub-processors, model training and deletion arrangements.


  • Don’t enter personal, sensitive or confidential information into open-access or free-tier AI tools. Where AI processing of such data is needed, use an approved tool with a data processing agreement, appropriate security controls and a documented lawful basis.


  • Check the lawful basis and any additional condition before using AI with beneficiary information, and involve your DPO or legal adviser.


Why does human oversight matter for charity AI decisions?


What UNESCO says: Accountability for AI use should remain with identifiable people or organisations, and human oversight should be meaningful rather than a token sign-off. For higher-risk or irreversible decisions, humans should retain the ability to review, override or stop AI-supported outputs.


Charity implication: This principle matters profoundly for charities. AI may influence decisions about eligibility for support, safeguarding triage, service prioritisation or the information frontline staff act on. When teams are stretched, the temptation is to use AI to speed things up. However, if no one has the time, authority or expertise to review the output properly, speed can turn into silent harm.


Things to consider:


  • Identify the decisions in your workflows that could materially affect someone’s rights, safety, access to support or wellbeing. These are your human-review points.


  • Build review checkpoints into AI-assisted processes and make sure reviewers have enough context, training and authority to challenge the AI output.


  • If your team cannot meaningfully review the output, pause or narrow the use case rather than deploying it as-is.


How can charities prevent AI bias and discrimination?


What UNESCO says: AI actors should promote social justice and safeguard fairness and non-discrimination. They should make reasonable efforts to avoid reinforcing or perpetuating discriminatory or biased applications and outcomes throughout the AI lifecycle.


Charity implication: AI tools can reflect biases in the data they were trained on, or in the way they are used, and those biases can affect the very communities charities aim to serve. A tool that works well for one group may perform less well for another. A language model may also produce outputs that reflect narrow cultural assumptions.


Things to consider:


  • Test AI outputs with the diversity of your service users in mind. If your charity works with people who speak multiple languages or have specific accessibility needs, check how well the tool performs for those groups.


  • Watch for proxy discrimination. An AI system may not use protected characteristics directly, but it can still produce biased outcomes based on postcode, language patterns or other correlated data.


  • Build feedback loops so frontline staff can flag when outputs seem unfair, inaccurate or poorly suited to specific communities. This should be an ongoing process, not a one-off check.


Next Steps: Turning UNESCO’s Framework into Charity Governance


  • Map current use: Identify every AI tool or AI-enabled feature in use, including tools built into CRM, fundraising, case management or productivity systems.


  • Classify the risk: Separate low-risk uses from higher-risk uses that may affect people’s rights, safety, access to support or personal data.


  • Assign accountability: Name who is responsible for approving, monitoring and reviewing each use case.


  • Set boundaries: Decide where human review is required, what data must never be entered, and which uses should not go ahead without further assessment.


  • Review vendors: Check supplier terms, retention settings, training policy, security controls and exit arrangements before wider rollout.


  • Document and train: Record decisions in a simple AI register or risk log, and make sure staff understand the rules.
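For teams that prefer a lightweight structured record over a spreadsheet, the AI register described above could be sketched in code. This is a minimal illustration only: the field names, risk tiers and the example tool ("ExampleDraftBot") are all assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Risk tiers are illustrative; adapt them to your own governance framework.
class RiskTier(Enum):
    LOW = "low"            # back-office use, no personal data
    ELEVATED = "elevated"  # personal data involved or decision-influencing
    HIGH = "high"          # may affect rights, safety or access to support

@dataclass
class AIRegisterEntry:
    """One row in a simple AI register / risk log (field names are assumptions)."""
    tool: str
    use_case: str
    risk_tier: RiskTier
    owner: str                # named person accountable for this use
    human_review_point: str   # where meaningful human review happens
    data_entered: str         # what data the tool may receive
    assessed_on: date
    next_review: date

def needs_further_assessment(entry: AIRegisterEntry) -> bool:
    """Higher-risk uses should not go ahead without further assessment."""
    return entry.risk_tier is not RiskTier.LOW

# Example entry for a hypothetical drafting assistant.
entry = AIRegisterEntry(
    tool="ExampleDraftBot",  # hypothetical tool name
    use_case="Drafting supporter newsletters",
    risk_tier=RiskTier.LOW,
    owner="Head of Communications",
    human_review_point="Editor approves every draft before sending",
    data_entered="No personal or beneficiary data",
    assessed_on=date(2025, 4, 1),
    next_review=date(2025, 10, 1),
)

print(needs_further_assessment(entry))  # False for this low-risk example
```

Even a record this simple captures the essentials: a named owner, a human review point, a risk classification and a review date, which together make the "document and train" step auditable.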


A quick sense-check: if your charity has an AI policy, read it back with UNESCO's principles in mind. Does it address whether AI outputs are fair across the communities you serve? Have you assessed the ethical implications, not just the data protection ones?

If your organisation is navigating AI decisions and wants to make sure governance keeps pace, or you’re unsure whether your current approach covers the ethical ground, book a free 20-minute AI governance check-in.




Note: These insights are based on practitioner experience and do not constitute legal or regulatory advice. Always review your specific funder contracts, data protection policies (GDPR) and safeguarding policies before making significant changes to operations. Examples are for illustrative purposes only; no official affiliation with the organisations or tools mentioned is claimed.

© 2026 Insights2Outputs Ltd. | All rights reserved | Privacy Policy

Disclaimer: This content is provided for informational and illustrative purposes only. It does not constitute professional advice and reading it does not create a client relationship. Always obtain professional advice before making significant business decisions.
