Ethical AI for Co-Production in Charities
Helen Vaterlaws
Oct 2, 2025 · Updated: Jan 3
Practical ways to widen access and cut admin without losing the human core

If you’re experimenting with co-production in research or service design, you’ve probably already bumped into AI tools. Live captions, auto-transcription, and smart summaries promise time back and wider access, but they also raise very real questions about consent, safety, and trust.
For co-production, AI should be an amplifier, not an autopilot. It can widen access and speed up the boring bits, but it must never replace human judgement or lived experience.
This mini-guide sits alongside our main piece on co-production in research and focuses on one thing: how to use ethical AI for co-production in a way that protects relationships and keeps people firmly in control.
What we mean by ethical AI for co-production
When we talk about ethical AI here, we mean:
AI as a supporting tool, not the decision-maker.
People know when it’s being used, and can opt out.
Any AI output is checked and shaped by humans, especially participants.
Data is minimised, protected, and never used to train external models.
If those pieces aren’t in place, it’s not ethical enough for co-production work.
Quick wins: access
Used carefully, AI can help more people take part in co-produced research and design. You can use ethical AI to:
Add live captions so online workshops work better for deaf and hearing-impaired participants, people in noisy spaces, and those joining from mobiles.
Create easy-read rewrites of consent forms, briefings, and summaries, which you then human-check with participants.
Generate multilingual summaries so people can read the key points in their preferred language before or after a session.

Two golden rules:
Always offer non-AI routes (e.g. human interpreters, phone calls, printed materials).
Always review, cross-reference, and confirm AI outputs before sharing or relying on them.
Quick wins: admin
Ethical AI can also reduce admin, so you have more capacity for the human parts of co-production. Helpful places to start:
Auto-transcription with human-checked themes
Use transcription tools to generate a draft, then theme with participants or a mixed team.
Treat AI as a fast first pass, not the final answer (see the sketch below).
Scheduling, reminders, and action capture
Let tools handle diaries, reminders, and basic follow-ups.
Turn actions from workshops into shared lists that everyone can see and update.
The point is to reduce the admin burden, so more of your time is spent listening, reflecting, and deciding together.
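To make the “fast first pass” idea concrete, here is a minimal Python sketch that groups transcript lines under draft themes by keyword matching. The theme names, keywords, and sample quotes are hypothetical, and the grouping is deliberately crude: it produces a draft for participants or a mixed team to check, merge, or throw out, not a finished analysis.

```python
# A minimal sketch of "fast first pass" theming: group transcript lines under
# draft themes by keyword. The theme names and keywords are hypothetical and
# would be agreed (and rewritten) with participants.

DRAFT_THEMES = {
    "access": ["caption", "interpreter", "easy read", "travel"],
    "trust": ["consent", "data", "privacy", "anonymous"],
    "workload": ["admin", "time", "deadline", "meeting"],
}


def draft_theme_transcript(lines):
    """Group transcript lines under draft themes by simple keyword matching."""
    grouped = {theme: [] for theme in DRAFT_THEMES}
    grouped["unthemed"] = []  # anything unmatched still needs human eyes
    for line in lines:
        lowered = line.lower()
        matched = False
        for theme, keywords in DRAFT_THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                grouped[theme].append(line)
                matched = True
        if not matched:
            grouped["unthemed"].append(line)
    return grouped


if __name__ == "__main__":
    sample = [
        "We couldn't follow the session without captions.",
        "I wasn't sure who would see my data afterwards.",
        "The admin before each workshop takes hours.",
    ]
    for theme, quotes in draft_theme_transcript(sample).items():
        print(theme, "->", quotes)
```

Whatever tool you actually use, the same principle applies: nothing becomes a “theme” until the people in the room have agreed it is one.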
Guardrails: non-negotiables for ethical AI in co-production

To keep co-production safe and trustworthy, build these guardrails in from the start:
Informed consent: Be clear in consent forms and briefings about which AI tools are used (for transcription, captions, summarising, etc.) and why.
No model training on participant data: Don’t allow tools to train on transcripts or notes from lived-experience sessions.
Right to opt out: Always offer a no-AI alternative for people who prefer it (for example, manual note-taking or human-only translation).
Data minimisation: Redact names and identifiable details by default (see the sketch below). Only keep what you genuinely need for the project.
Human validation: Make sure participants (or an advocate group) can check and correct AI-generated summaries or themes before they feed into decisions.
DPIA & ethics log: Keep a simple record of which tools you’re using, what they do, where data is stored, how long you keep it, and who to contact with concerns.
These are the basics that make ethical AI for co-production defensible under scrutiny.
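If your team tidies notes or transcripts in-house before they go anywhere near a third-party tool, a basic redaction pass is easy to script. The sketch below is illustrative only, assuming a small, hypothetical list of participant first names; it will not catch nicknames, place names, or descriptions that identify someone, so it is a first layer, not anonymisation.

```python
import re

# A minimal sketch of a default redaction pass on notes or transcripts before
# they leave your own systems. PARTICIPANT_NAMES is a hypothetical example
# list; automated redaction will miss nicknames, places, and other identifying
# details, so a human check is still essential.

PARTICIPANT_NAMES = ["Jordan", "Priya", "Sam"]  # hypothetical examples

EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s-]{8,}\d")


def redact(text):
    """Replace emails, phone-like numbers, then known names with placeholders."""
    # Emails and numbers go first, so an address containing a name is caught whole.
    text = EMAIL_PATTERN.sub("[email]", text)
    text = PHONE_PATTERN.sub("[phone]", text)
    for name in PARTICIPANT_NAMES:
        text = re.sub(rf"\b{re.escape(name)}\b", "[name]", text, flags=re.IGNORECASE)
    return text


if __name__ == "__main__":
    note = "Priya said the form confused her; email priya@example.org or ring 0114 496 0000."
    print(redact(note))
    # -> "[name] said the form confused her; email [email] or ring [phone]."
```

Treat the output as a starting point for a human check, alongside the consent, opt-out, and storage decisions above.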
Why ethical AI for co-production matters

When you use ethical AI for co-production well, you widen access, cut admin, and protect the human core of the work. People can see and shape how their words are used. Teams get time back to focus on conversation and sense-making, not just note-taking. And funders or ethics panels can see clear, proportionate guardrails. Done badly, AI can erode trust, confuse consent, and drown out the very voices co-production is supposed to centre. The difference isn’t the tool; it’s the design.
Quick Q&A: ethical AI for co-production
1. What is ethical AI for co-production in plain language?
It’s using AI tools as a helper (for captions, transcription, summaries or admin) while keeping people fully informed, able to opt out, and in control of how their words are interpreted. AI supports the process; it doesn’t make decisions or replace lived experience.
2. Where can AI genuinely help co-produced research or design?
The safest, most useful spots are access and admin: captions, easy-read text, multilingual summaries, transcription, theming drafts, and scheduling. Anywhere the output is later checked and shaped by humans is usually a better fit than direct decision-making.
3. What should we include about AI in consent forms?
Explain which AI tools you use (for example, transcription or translation), what they do, how data is stored, and how someone can say no to AI and still take part. Plain language beats legalese every time.
4. How do we handle people who don’t want AI used on their data?
You respect it. Offer a non-AI route (manual notes, human translators, low-tech materials) and make sure it is just as valued. Co-production is about shared power; forcing AI on people undermines that.
If this all feels familiar, our main post on co-production in research for charities digs deeper into guardrails, gates, and success signals.
Change doesn’t start with a workshop; it starts with one honest conversation.
Note: Examples are for illustrative purposes only; no official affiliation with the organisations or tools mentioned is claimed. AI systems can be unpredictable, so always keep personal or sensitive data out of third-party tools and ensure your implementation follows your own organisation’s data protection policies.


