
Ethical AI for Co-Production in Charities

  • Writer: Helen Vaterlaws
  • Oct 2, 2025
  • 3 min read



If you are leading co-production in research or service design, you are likely already navigating the "AI dilemma": the promise of efficiency versus the priority of trust. Live captions, auto-transcription, and smart summaries promise time back and wider access, but they also raise vital questions about consent, safety, ethics and data privacy.


For co-production, AI should be an amplifier, not an autopilot. It can widen access and reduce the administrative burden, but it can't replace human lived experience. This guide focuses on one core principle: how to use ethical AI for co-production in a way that protects relationships and keeps people firmly in control.


Defining Ethical AI for Charity Co-production


When I talk about ethical AI in this context, I mean:


  • Human-led decisions: AI acts as a supporting tool, not the decision-maker.

  • Transparency: Participants know when it’s being used and can opt out without penalty.

  • Human oversight: Any AI output is checked and shaped by humans, specifically the participants themselves.

  • Data privacy & sovereignty: Data is minimised, protected, and (where possible) kept out of any model-training pipelines, verified through vendor terms and settings.


If these pieces aren’t in place, the implementation is unlikely to be robust enough for ethical co-production.


Accessibility Benefits: Widening the Circle


Used carefully, AI can help more people take part in co-produced research and design. Ethical uses include:


  • Live captions: Improve online workshops for deaf and hard-of-hearing participants, people in noisy environments, or those joining via mobile devices.


  • Easy-read rewrites: Generate draft versions of consent forms and briefings in plain language, which are then checked and refined with participants before use.


  • Multilingual summaries: Provide key points in a participant's preferred language to ensure they can engage fully before or after a session.


Watch out for bias: AI transcription can struggle with accents and dialects, conversational nuance, and emotional context. Always ensure a human who was in the room performs the final verification of any transcript.


Operational Efficiencies: Freeing Up Human Space


Ethical AI can reduce the administrative burden, allowing your team to focus on the human elements of co-production.


  • Auto-transcription with human-checked themes: Use tools to generate a fast first-pass draft, then perform the thematic analysis with participants. Treat AI as a starting point, not the final version.


  • Action capture: Use tools to handle scheduling and turn workshop discussions into shared action lists that everyone can see and update in real time.


The goal is to reduce busywork so that more of your time is spent listening, reflecting, and deciding together.


6 Essential Guardrails for AI in Lived Experience Projects


To keep co-production safe and trustworthy, build these guardrails into your project design from the start:


  1. Informed Consent: Be explicit in briefings about which AI tools are being used (e.g., for transcription or summarising) and exactly why they are being used.


  2. Model Training Policy: Use tools and settings that explicitly opt out of model training (and verify this in the vendor’s terms and configuration). Keep use aligned with your data protection and safeguarding policies.


  3. Right to Opt Out: Always offer a non-AI alternative (such as manual note-taking or human interpreters) for those who prefer it.


  4. Data Minimisation: Redact names and identifiable details by default. Only keep what is strictly necessary for the project.


  5. Data Deletion: Be clear what withdrawal means in practice, and ensure your workflow can remove or anonymise a participant’s identifiable data where feasible (in line with your retention and safeguarding requirements).


  6. Human Validation: Ensure participants or an advocate group validate AI-generated summaries before they are shared.


Change doesn’t start with a workshop; it starts with one honest conversation.




Note: These insights are based on practitioner experience and do not constitute legal or regulatory advice. Always review your specific funder contracts, data protection policies (GDPR) and safeguarding policies before making significant changes to operations. Examples are for illustrative purposes only; no official affiliation with the organisations or tools mentioned is claimed.

© 2026 Insights2Outputs Ltd. | All rights reserved | Privacy Policy

Disclaimer: This content is provided for informational and illustrative purposes only. It does not constitute professional advice and reading it does not create a client relationship. Always obtain professional advice before making significant business decisions.
