AI-generated Images: The Imagery Problem Charities Haven’t Noticed Yet
- Helen Vaterlaws

New research from the University of East Anglia finds that the sector's "high-tech shortcut" to empathy is backfiring.

If your charity has used AI-generated images in a campaign, a social media post, or a fundraising appeal in the last twelve months, you are not alone.
With funding pressures increasing, resources stretched and service demand continuously growing, it is understandable that charities might look to these new technologies to help increase impact while minimising costs.
However, the use of AI-generated images does raise practical and ethical questions which require careful consideration and governance.
Here I discuss the implications of the new research findings and how the emerging EU Code of Practice on Transparency of AI-Generated Content may help shape how the sector responds.
What the Research Found: AI-Generated Images Across 17 Charities
The study, titled Artificial Authenticity, analysed 171 AI-generated images published by voluntary organisations ranging from major international development charities to small grassroots charities between 2023 and early 2025. Three findings stand out:
Disclosures were inconsistent - Over 10% of images carried no AI credit or disclosure. Where disclosure was provided, the quality varied enormously, from prominent labelling to fine-print attributions easily overlooked by scrolling audiences.
Imagery reproduced harmful visual tropes - Rather than breaking away from long-criticised patterns of depicting poverty in reductive or dehumanising ways, AI-generated images replicated them.
Public conversation was derailed - Of the 472 comments analysed across six charity campaigns, only 80 discussed the charitable cause. The rest focused on AI ethics, technical quality, and authenticity debates.
Why AI-Generated Images Are a Governance Risk for Charities
Although the research focused on international development organisations, the lessons apply across the sector.
AI image generators are now embedded in everyday tools, with many platforms offering generated imagery by default. A communications officer producing a campaign asset may not realise the image selected was never a photograph. As these tools advance, the distinction between stock photography and synthetic imagery is becoming harder to detect.
Emerging research reinforces these concerns. Studies suggest donors are significantly more likely to contribute to campaigns created by humans rather than AI. Researchers have also found that awareness that content is AI-generated can reduce empathic responses, lowering intentions to donate.
This raises new charity governance risks:
Safeguarding - AI-generated images raise safeguarding questions that existing image policies were never designed to address.
Reputational - If supporters, journalists, or regulators identify undisclosed AI imagery in charity materials, the impact can be immediate.
Ethical - Publishing synthetic images of the communities you serve without transparency risks undermining the authenticity of your communications. It also raises wider questions around copyright, impacts on creative work, and environmental cost.
The UEA researchers noted that, at the time of their study, internal charity guidelines had not kept pace with the speed at which AI image generation tools had been adopted by communications teams.
Another key finding was that even when charities disclosed their use of AI imagery, public backlash still occurred. More research is needed, but the emerging data suggest that the legitimacy of AI imagery depends on both disclosure and whether its use aligns with your mission, values, and beneficiaries' dignity.
The researchers concluded that transparency functions as a necessary ethical baseline, but should not be viewed as the full solution.
Beyond Transparency: Rethinking AI Imagery Through Co-Creation
The UEA study also surfaces a more constructive path forward. If choosing to use AI-generated imagery, organisations should co-create it with local communities by involving them in the creative process, including generating AI prompts and approving final imagery, to ensure the images are accurate and culturally appropriate.
Co-created AI imagery could empower communities to decide how they are represented; giving local organisations tools to produce their own images at scale could meaningfully reshape how AI visualises communities.
However, this approach introduces more ethical questions: resource access, power dynamics, and sustainability. It also requires investment which many charities would struggle to resource.
The EU Code of Practice: What UK Charities Can Learn About AI Transparency
While the charity sector works through these questions, regulation is catching up. The EU’s second draft Code of Practice on Transparency of AI-Generated Content, published under the EU AI Act, is currently open for stakeholder feedback.
The draft Code sets out requirements across four commitments that are relevant to any organisation using AI-generated content, including charities: disclosure and labelling, documented internal processes, staff awareness and training, and accessibility.
For UK charities, the Code potentially offers a useful framework for thinking about what good governance around AI imagery looks like, even if it is not a formal regulatory requirement.
I will be doing a deeper dive into what UK charities can learn from the EU Code of Practice. Follow me on LinkedIn to stay updated.
In the meantime, a practical place to start is to ask your communications team whether any imagery currently live across your website, social channels, or campaign materials was AI-generated. The answer will reflect how quickly these tools have become embedded in everyday workflows and will provide a good starting point for governance discussions.
The Bigger Picture: AI Governance for Charities Is Evolving Fast
This is not a call to ban all AI-generated images. There are legitimate uses, particularly where photography is impractical, where real individuals cannot be identified for safety reasons, or where illustration better serves the message.
The UEA study itself noted supportive public responses where AI was used to avoid re-traumatising vulnerable subjects or to protect identifiable individuals. However, the research also makes clear that even well-intentioned AI imagery use carries risks if it is not governed thoughtfully.
Robust governance of AI in charities is essential to realising benefits while maintaining safety and ethics. For more guidance on robust AI adoption in charities read:
AI Governance for Charities: Practical Lessons from the International AI Safety Report 2026 - Deep dive into governance frameworks and structures.
Safe AI Innovation in Your Charity - A step-by-step implementation guide for responsible AI adoption.
If your organisation is navigating AI decisions and wants to ensure governance keeps pace, or you're unsure how to safeguard AI use, book a free 20-min conversation about AI safety and governance.
Note: These insights are based on practitioner experience and do not constitute legal or regulatory advice. Always review your specific funder contracts, data protection policies (GDPR) and safeguarding policies before making significant changes to operations. Examples are for illustrative purposes only; no official affiliation with the organisations or tools mentioned is claimed.


