
The future of impact: decentralised evaluation for charities by 2035

  • Writer: Helen Vaterlaws
  • Apr 17, 2025
  • 7 min read

Updated: Jan 2


Charities are juggling a lot. Budgets are tight, teams are stretched, and funders want more evidence with fewer resources. At the same time, there is growing interest in decentralised impact evaluation for charities, because it promises more trust and more community voice without adding more admin. The buzz around AI and blockchain can be noisy, but underneath it there are practical tools that are getting cheaper and easier to use, and that can help you tell your charity's story more clearly.


Why current metrics fall short


Traditional metrics capture activity, but the meaning often gets lost. They miss the mother who finally feels in control or the young person who decides to show up again. Those are real outcomes, but they are often buried beneath KPIs that sit far from day-to-day delivery. The challenge for charities is measuring what truly matters, even when it is hard to quantify.


The 2035 vision: communities in the driver’s seat



Imagine that by 2035 your charity’s evaluation feels less like an audit and more like an ongoing conversation. Success is not set only by funders or executives, but by the people you serve. Decentralised impact evaluation is simply this: communities help define and govern what counts. Feedback is collected little and often, and change is visible in real time.


  • Pulse logging: people share short reflections in the channel that suits them.

  • Real-time insight: simple AI tools group what people are saying so you can adapt quickly.

  • Community governance: beneficiaries help decide which changes matter most.

  • Transparent records: whether it is a shared database or blockchain, you have a traceable record donors and communities can trust.
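If you are curious what a "traceable record" actually means in practice, here is a minimal sketch in Python (purely illustrative; the field names are made up for this example). Each feedback entry stores a hash of the one before it, so any later edit is detectable. This is the same tamper-evidence property a shared database or blockchain offers at larger scale, with no blockchain required.

```python
import hashlib
import json

def add_entry(log, channel, reflection):
    """Append a pulse-log entry, chained to the previous one via SHA-256."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"channel": channel, "reflection": reflection, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash in order; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
add_entry(log, "SMS", "I felt listened to this week")
add_entry(log, "paper", "The new session time works better for me")
assert verify(log)           # the untouched log checks out
log[0]["reflection"] = "edited"
assert not verify(log)       # any edit breaks the chain
```

You would never ask staff to run this by hand; the point is simply that "transparent" can mean an append-only log anyone can check, which is a much smaller commitment than a full blockchain.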


Why decentralised impact evaluation for charities makes sense now


Decentralised impact evaluation for charities is not new. Many organisations have wanted to listen more deeply and reflect what communities value. What is different now is the timing.



Tools that once felt out of reach are becoming more affordable and easier to use, even for stretched teams. You do not have to do it all at once. If you start small now, you lay the foundations for evaluation that serves your mission instead of adding to the admin. You also start to collect richer, more authentic stories that funders and donors can connect with.


This is not a tech fantasy. It is a mindset shift. The good news? No coding required.


Avoiding the tech hype: decentralised governance should start with people



You do not need to jump into complex tech. In fact, you should not. AI, blockchain and other tools will not fix unclear impact measures on their own. Used well, they can support your work. Used badly, they can add work or exclude the people you most want to hear from.



Start by asking:


  • Does this tool meet a real need in our charity or is it just a trend?

  • Is it accessible for the people we work with, not just the digitally confident?

  • Is there a simpler, lower-tech way to get the same result?


If it feels over-engineered, it probably is. The right solution is the one your team and community will actually use.


A practical roadmap to 2035


Flowchart of the four phases: Listen Deeply, Uncover Insights, Share Decisions, Scale Governance.

You do not need a tech team or a big budget to start. You already have what you need: your team, your community and a willingness to listen differently. Work in phases and let it grow with your organisation.


Phase 1: Listen Deeply (0–6 Months)


  • Action: Start small. Pilot a simple pulse log in one programme, using whatever format works.

  • Tools: Google Forms, SurveyMonkey, paper notes, SMS.


Tip: Share back early: “You said X, so we are exploring Y.” That builds trust.


Phase 2: Uncover Insights (6–18 Months)


  • Action: Once feedback is coming in, look for patterns and take them back to the community: “Does this reflect your experience?”

  • Tools: Insight7 (free tier) or Google Sheets; even a basic spreadsheet can go a long way.


Tip: AI can help, but always sense-check themes with people.
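To show how simple "uncovering insights" can be, here is a small, illustrative Python sketch that tallies which themes come up in pulse-log responses using plain keyword matching. The theme names, keywords and responses are all invented for this example; in practice you would build the keyword list from what your community actually says, and sense-check the results with them.

```python
from collections import Counter

# Illustrative theme keywords -- in real use, draw these from your own
# community's language, not a fixed list like this one.
THEMES = {
    "confidence": ["confident", "control", "capable"],
    "belonging": ["welcome", "listened", "included"],
    "access": ["travel", "transport", "online", "time"],
}

def tally_themes(responses):
    """Count how many responses touch each theme (simple keyword match)."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(word in lowered for word in keywords):
                counts[theme] += 1
    return counts

responses = [
    "I finally feel in control of my week",
    "Staff made me feel welcome and listened to",
    "Transport is still hard, but the online option helps",
]
print(tally_themes(responses))
# Counter({'confidence': 1, 'belonging': 1, 'access': 1})
```

A spreadsheet filter or a free AI tool does the same job; the sketch just makes the logic visible so you can judge whether an AI's "themes" are any smarter than a keyword count.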


Phase 3: Share Decisions (1–2 Years)


  • Action: Test a small decision-making process where the community chooses between priorities. Keep it simple and familiar.

  • Tools: Slido, Loomio, or Snapshot if you want secure voting.


Tip: Always offer offline or staff-supported options.


Phase 4: Scale Governance (2–5+ Years)


  • Action: Set up a community advisory group and make clear how their input is used.

  • Tools: Airtable, Looker Studio, or more advanced options if you are ready.


Tip: Learn with others through networks so you do not build it alone. Co-develop tools or approaches with others in your field to scale impact affordably and sustainably.


Note: Although this model is adaptable across many settings, it won’t be right for everyone. Community-led governance must always be balanced with safeguarding, timing, and the lived realities of those you serve.


Navigating the risks that matter



Even good ideas can create pressure if they are not designed for real life. Decentralised impact evaluation is no different. These are the risks charities run into first, and simple ways to keep them small.




Time and Capacity Pressures


Risk: Even light pulse logs take time.


  • Start Small: Focus on a single programme or one key question to reduce complexity.

  • Leverage Support: Engage volunteers, university partners, or short-term interns to lighten the load.

  • Plan for Longevity: Include evaluation time in staff schedules and embed costs into funding bids.


Digital Exclusion


Risk: Not everyone can or wants to use online tools.


  • Use Multiple Channels: Pair tech tools with paper surveys, in-person check-ins, or visual prompts.

  • Offer Support: Provide guided assistance, translated materials, or voice-based options.

  • Test First: Pilot new tools with a small group and tweak based on barriers they experience.


Privacy and Data Trust


Risk: People will not share if they do not feel safe.


  • Keep It Anonymous: Default to no-name options where possible.

  • Choose Tools Carefully: Always use secure, reputable platforms with transparent settings.

  • Communicate Clearly: Use consent scripts that are plain, friendly, and culturally sensitive.


Bias in Feedback and Analysis


Risk: Vocal groups or AI tools can skew what you see.


  • Broaden Voices: Use targeted outreach to include quiet, under-represented groups.

  • Sense-Check Outputs: Always review human or AI-generated insights with your team and, if possible, with participants.

  • Up-skill Teams: Offer basic training on recognising and mitigating bias.


Funder or Board Resistance


Risk: Some people still prefer traditional numbers.


  • Present Both: Pair traditional indicators with community narratives (“90% satisfaction rate + key themes from feedback”).

  • Show Progress: Share pilot wins, even small ones, to demonstrate value.

  • Bring Credibility: Collaborate with independent researchers or evaluators to validate the process.


Over-Reliance on Technology


Risk: Letting the tool lead can make things harder.


  • Stay Human-Centred: Let relationships drive your approach, not the tool.

  • Scale Carefully: Don’t move to blockchain or AI until you're confident the basics are solid.

  • Pause and Review: Regularly assess whether tools are still meeting your needs.


Community Fatigue or Disengagement


Risk: Too many asks or not closing the loop will switch people off.


  • Keep It Light: One or two focused questions are often enough.

  • Close the Loop: Always show how feedback shaped decisions.

  • Pace It Out: Space out requests and avoid busy periods for your audience.


Quick wins you can feel now


It is hard to plan for 2035 when you are dealing with 2025 pressures. The good news is that small, community-led steps can help right away.

You can:


  • Ease staff burnout: let participants log their own pulses on paper, SMS or simple forms so staff spend more time on relationships, not reports.

  • Rebuild donor confidence: share real stories in real time so you can show impact without writing another long report.

  • Lift morale: reflect feedback back to staff and volunteers so they can see the difference they make.

  • Grow reach: when people share positive moments, others listen. Those authentic voices travel further than KPIs.


You do not have to wait ten years for change. You will feel it as soon as people see their feedback shaping what happens next.


Quick Recap


Start small. Keep it simple. Build feedback into what you already do, and let community voices guide the way. Decentralised evaluation isn’t about tech. It’s about trust, timing, and listening like it matters. Tech is just a tool to help you do that more efficiently.



FAQ on decentralised impact evaluation for charities

1. What is decentralised impact evaluation, in plain language?

It is a way of evaluating impact where the people you serve help define what counts, feed in little and often, and can see how their feedback is used. It should feel more like an ongoing conversation than an audit. In the blog post above that looks like pulse logs, simple AI to group what people are saying, and shared records that funders and communities can trust.


2. Do we need blockchain or AI to start?

No. You can start with low-tech tools your team already knows, such as Google Forms, SMS, paper notes or simple shared sheets. Tech like AI or blockchain is only useful once your basic process is tidy and people are actually giving feedback. Start with what your staff, volunteers and communities will use.


3. How do we avoid digital exclusion if we ask for more feedback?

Offer more than one way to take part. Pair online tools with paper, in-person check-ins or staff-assisted sessions. Test it with a small group first and keep the language simple. The goal is more voice, not more admin.


4. How do we bring funders and boards with us?

Show both. Keep your core indicators and add what people are saying. For example: “90% satisfaction this quarter, and the top themes from community feedback were X, Y, Z.” Share small pilot wins and, if needed, involve an external person to sense-check the approach.


5. We are already stretched. What is the smallest version we can run?

Start with one service, one question and one channel. Ask something like “What helped most this week?” Collect it in the simplest way you can, then tell people “you said this, so we did that.” That is a decentralised habit in miniature, and it is enough to begin.



Change does not start with a workshop; it starts with one honest conversation that builds trust and momentum.




Note: Examples are for illustrative purposes only; no official affiliation with the organisations or tools mentioned is claimed. AI systems can be unpredictable, so always keep personal or sensitive data out of third-party tools and ensure your implementation follows your own organisation’s data protection policies.

© 2026 Insights2Outputs Ltd. All rights reserved.

Disclaimer: This content is provided for informational and illustrative purposes only. It does not constitute professional advice and reading it does not create a client relationship. This includes our AI frameworks, which are designed for strategic experimentation. Always obtain professional advice before making significant business decisions.
