This blog post follows up on a previous post about Everyone Counts, a new initiative founded by World Vision, CARE International and Kwantu. The initiative aims to give marginalised citizens a voice in relation to their satisfaction with public services. You can read the previous post here.
The United Nations Secretary-General’s Independent Expert Advisory Group on a Data Revolution for Sustainable Development (IEAG) recently released its report - A World That Counts: Mobilising The Data Revolution for Sustainable Development.
The report highlights two big global challenges for the current state of development data:
- The challenge of invisibility (gaps in what we know from data, and when we find out)
- The challenge of inequality (gaps between those with and without information, and what they need to know to make their own decisions)
As the global community grapples with these challenges in the context of monitoring the SDGs, I propose a third challenge to consider:
- The challenge to include the voices of citizens - particularly those often excluded or left behind - in a meaningful way
I’d like to introduce you to a new partnership that we’ve established to try and tackle this third challenge.
Everyone Counts is a new initiative founded by World Vision, CARE International and Kwantu. The initiative aims to give marginalised citizens a voice in relation to their satisfaction with public services (as per Sustainable Development Goal (SDG) indicator 16.6.2). In practical terms this means:
- Assessing to what extent services are meeting their needs and rights
- Identifying problems with the quality of service delivery
- Determining if the SDGs are having the impact they are meant to have in their lives
How will this work? There are some key principles behind the initiative that guide us in our work.
Start with what already works
Our first key principle is to build on what is already working and happening. This is not about launching a new survey or data collection exercise. Kwantu, CARE and World Vision have been working for many years on social accountability. We each have experience of using a participatory methodology called community scorecards. Developed by CARE 15 years ago, it has since been taken up and used by Plan, World Vision, the World Bank and many others.
It is a very straightforward methodology: we ask citizens using a public service (a school, clinic or other type of service) to score the quality of services that they received against a set of indicators that they devise. We get service providers to do the same. And then we bring them together to discuss and come up with an action plan to address the identified issues.
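To make the comparison at the heart of the methodology concrete, here is a minimal sketch in Python. The indicators, scores and gap threshold are invented for illustration; they are not real scorecard data or the initiative's actual scoring rules.

```python
# Hypothetical example: citizens and service providers score the same
# indicators (1-5), and large gaps flag issues for the joint discussion.
citizen_scores = {"waiting_time": 2, "staff_attitude": 4, "drug_availability": 1}
provider_scores = {"waiting_time": 4, "staff_attitude": 4, "drug_availability": 3}

def discussion_points(citizens, providers, gap_threshold=2):
    """Indicators where citizen and provider perceptions diverge."""
    return sorted(
        ind for ind in citizens
        if abs(citizens[ind] - providers[ind]) >= gap_threshold
    )

# Here waiting_time and drug_availability diverge by 2 points, so both
# would be tabled when the two groups come together.
print(discussion_points(citizen_scores, provider_scores))
```

In practice the indicators themselves come out of the participatory process, so the dictionary keys would differ from community to community.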
Between CARE, World Vision and Kwantu, we are already using it in almost 40 countries and across more than 1,400 facilities. These are significant numbers just among the founding partners, and they will grow as other partners join.
Transformative and empowering, not extractive
Our second key principle is to take a transformative and empowering approach, not an extractive one. It’s critical that data is generated by citizens in a way that is transformative and empowering to them. Community scorecards take a participatory and deliberative approach to identifying indicators. This is not about imposing a set of indicators or questions that have been developed by experts. The process must give space for citizens to determine which issues they feel are important in their own language. The process also seeks to build a constructive relationship between citizens and those delivering public services. It’s primarily about finding local solutions to problems. Citizen-generated data is a by-product.
Use technology to connect work across multiple partners to reach scale
Our third principle is to use technology only as a layer on top of what already works. Technology is a powerful catalyst, but can be a force for positive or negative change. The community scorecard methodology has been widely used and well evaluated. We have evidence that it is effective at improving the quality of public services. We will use technology to make what we know already works, work even better.
Specifically there are three challenges that we will use technology to address:
- Comparability: how can we standardise and compare data coming from different organisations, while still allowing for contextual variations?
- Quality of data: how can we ensure that data are of high quality and have enough credibility to be taken seriously?
- Getting to scale: how can we take data from the community level up to the national level, aggregating data to build the bigger picture?
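As an illustration of the scale challenge, here is a minimal sketch of rolling standardised scorecard results up from facility level to district and national views. The data model (country, district, facility, indicator, score) is a hypothetical assumption for illustration, not the initiative's actual schema.

```python
# Hypothetical example: aggregate standardised scorecard records upwards
# by averaging scores per indicator at each administrative level.
from collections import defaultdict
from statistics import mean

# Each record: one score for one indicator at one facility.
scores = [
    {"country": "MW", "district": "Lilongwe", "facility": "Clinic A",
     "indicator": "staff_attitude", "score": 3},
    {"country": "MW", "district": "Lilongwe", "facility": "Clinic B",
     "indicator": "staff_attitude", "score": 5},
    {"country": "MW", "district": "Blantyre", "facility": "Clinic C",
     "indicator": "staff_attitude", "score": 4},
]

def aggregate(records, level):
    """Average scores per indicator, grouped at the given admin level."""
    groups = defaultdict(list)
    for r in records:
        groups[(r[level], r["indicator"])].append(r["score"])
    return {key: mean(vals) for key, vals in groups.items()}

district_view = aggregate(scores, "district")  # the district picture
national_view = aggregate(scores, "country")   # the bigger picture
```

Because every record shares the same standardised fields, the same function builds the picture at any level; the comparability and data-quality challenges are about making that shared structure possible across partners.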
I'll talk more about how we plan to use technology in a follow-up post. Meanwhile if you’d like to get in touch please leave us a comment or subscribe to this blog. We’ll add you to our mailing list so you can get updates on the initiative.
Models of implementation continue to evolve for many NGOs. One big trend that particularly interests me is the shift from direct implementation of projects and programmes by international NGOs (INGOs) towards working with partners to support and build their capacity. Networks that represent particular groups are often a key part of this model. They may represent groups like young people, older people or those living with or affected by HIV. As structures they can provide legitimacy for participatory monitoring and joint advocacy efforts.
This post is a break from my current series looking at tools to visualise data. Instead I'll write about a tool that you may find useful to collect and organise data from the field.
If you're working at a larger scale across multiple sites then it's really important to collect your data in a more structured and standardised way. If you don't, the work involved in aggregating and analysing it is very significant. This is an area that we specialise in helping with.
This is the first in a series of blog posts about our work on data standards. The intention is to present our work and thinking to a wider audience, learn from you about other work that may connect to this and explore new contexts and partnerships in which we can test these ideas.
Standardisation versus customisation. This is one of the key tensions that technologists must balance when developing new technology. Where should the balance lie for M&E technology? What does this mean in practical terms?
These are some of the questions addressed in a short research paper that I presented at the South African Monitoring and Evaluation Association (SAMEA) conference earlier this month. Read on to see my presentation and download the paper.
For many small organisations a spreadsheet is the core of their monitoring and evaluation system. If you're working on smaller scales or with less standardised monitoring approaches, spreadsheets are a great option. They are both flexible and powerful.
Which would you prefer to look at? A never-ending list of numbers or an interactive chart that shows the same data visually?
Presenting data visually can help you quickly see patterns and predict trends. It can also be a powerful way to tell a story with your data.
What's the difference between project and programme management software? For those new to this space they might appear to be the same thing. They both offer the same kind of features: creating projects, assigning responsibilities to people and tracking if things get done or not. However, when you look more closely the differences are quite significant and can play a big role in the success or failure of the software as a tool to help you in your work.
This post looks at how to visualise your data on a map. It assumes no special knowledge and is intended for complete beginners. It starts with data in a spreadsheet and explains how to use that to create a map that you can embed on your website or copy and paste into a report.
There are many, many different tools to help you do this. In this post I'll look specifically at MapAList, which is one of the simplest to use. MapAList is a free, simple, codeless wizard for creating and maintaining customized maps.
Open Refine describes itself as 'a power tool for working with messy data, cleaning it up, transforming it from one format into another, extending it with web services and linking it to databases like Freebase.' It was born out of a project started by Google (and used to be called Google Refine), but is now an open source project hosted on Github.
Before I talk about how workflow can help you, let me first clarify what I mean by workflow. Since the definition on Wikipedia is kind of complex, here is my own attempt at a definition:
Workflow is a set of rules in a software application that define who must do what, when and how to achieve an objective.
To colleagues with an M&E background this may sound similar to an M&E plan. This also sets out objectives for a project or programme. It often describes the activities that will be carried out to achieve those objectives, how they will be measured, when and by whom.
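The definition above can be sketched as a simple data structure. The roles, tasks and helper function here are hypothetical examples for illustration, not any particular workflow product's API.

```python
# Hypothetical example: workflow as a set of rules defining who must do
# what, when, to achieve an objective.
from dataclasses import dataclass

@dataclass
class WorkflowRule:
    who: str        # role responsible for the step
    what: str       # the task to perform
    when: str       # trigger or deadline
    objective: str  # what completing the step contributes to

approval_workflow = [
    WorkflowRule("field officer", "submit monthly report", "by the 5th",
                 "timely programme data"),
    WorkflowRule("M&E adviser", "review and approve report", "within 3 days",
                 "quality-assured programme data"),
]

def next_step(workflow, completed):
    """Return the first rule whose task is not yet completed."""
    for rule in workflow:
        if rule.what not in completed:
            return rule
    return None
```

The software's job is then to apply these rules automatically: notify the field officer, then route the submitted report to the M&E adviser, much as an M&E plan assigns activities to people.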
Documenting lessons learned from your programme is a key responsibility for monitoring and evaluation advisers. This is typically part of donor reporting requirements, but should also be an important part of internal learning and knowledge sharing. However, all too often this section of a report is left to the last minute and is written under time pressure. This can result in something bland that talks in generalities.
Government and NGOs don’t always have a great track record with technology projects. Sadly this affects M&E systems as much as other areas. However, it’s also clear that done well technology can make a big impact. This may come in the form of cost savings (printing and transporting paper around), time savings (saving time spent transcribing data or manually aggregating data) and credibility (being able to quickly access and present data, aggregated at different levels with access to underlying evidence that backs it up).
For many organisations their experience of monitoring and evaluation starts with Excel. In this blog post I look at how Excel (and other spreadsheets) is used as a monitoring and evaluation tool, assessing the pros and cons of this option in relation to database driven alternatives.
In the last blog post I looked at some key features to consider when looking for forms technology to help collect data. However, regardless of which tool you choose, there are some basic usability issues to consider when designing your forms. Following best practices for designing data collection forms can save your colleagues time when entering data and can also play a key role in ensuring that the data entered is accurate and of high quality.
Have you considered using web and mobile based forms to collect data for your development programme? There are numerous benefits to switching, including significant time savings, higher quality data and of course faster access to your data. Not to mention of course savings on printing, transporting and storing lots of paper.
If you're thinking of making the switch then read on for some tips on key features to look out for when selecting technology to help you create and collect data via mobile and web based forms.
Got some free time and want to build your knowledge and skills on monitoring and evaluation? Here are 12 free monitoring and evaluation courses that you can take to increase your expertise. Maybe one will give you the insight or confidence to advance your work to the next level?
Over the last few years there has been a growing drive for development organisations to make better use of technology in their monitoring and evaluation (M&E) systems. This topic has featured at conferences, courses, online discussions and in a number of reports. However, I've seen less discussion or guidance to describe the different types of monitoring and evaluation technology, how they fit into an overall M&E system, and how they relate to the different needs of an organisation.