This is the second in a series of blog posts related to community scorecards. These are based on learning from our work with the Citizen Engagement Programme, CESC, N'weti and other partners in Mozambique using this tool. See here for the first post on an introduction to community scorecards. This post describes the process of building an M&E system to help manage and monitor data from the community scorecard process.
Implementing community scorecards in Mozambique
The Citizen Engagement Programme (CEP) supports citizens to monitor the quality of health and education service delivery in four of Mozambique’s eleven provinces, and to advocate at district, provincial and national levels to improve those services. The programme facilitates dialogue between citizens, communities and service providers to drive these improvements. Kwantu is one of the consortium partners implementing the programme.
CEP plans to work at scale across 773 health facilities and schools in Mozambique. The programme started working last year with local partners to run community scorecard (CSC) processes in many of these clinics, hospitals and schools. This is a participatory, citizen-led methodology for monitoring the quality of service delivery.
Logistical and data quality challenges
Implementing over 700 CSC processes presents significant logistical challenges for the partners and teams managing implementation, monitoring and evaluation, and internal learning and documentation. The teams need qualitative and quantitative data from the implementing partners at different stages of the process. Collecting, collating and processing the data for these three functions using Excel or Word templates would overwhelm the partners and the CEP team as the programme scales.
Working at this scale often introduces data quality challenges too. Key considerations were:
- Ensuring the same operational definitions were shared across all sites in 10 districts in 4 provinces
- Clarifying responsibility for data collection and data review on each site
- Standardising data collection forms across all sites and providing clear guidance
- Agreeing on a consistent approach for aggregating data to report against indicators
- Retaining a clear audit trail back to all source forms to help with data quality checks and evaluation
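The first, third and last of these considerations can be made concrete with a small validation routine. The sketch below is illustrative only (the field names and facility types are assumptions, not CEP's actual schema): every site submits the same standardised form, and values are checked against shared operational definitions before the data is accepted.

```python
# Hypothetical standardised form definition shared by every site.
# Field names here are illustrative, not CEP's actual schema.
REQUIRED_FIELDS = {"form_id", "site", "district", "facility_type", "submitted_by"}

# Shared operational definition: CEP monitors health facilities and schools.
ALLOWED_FACILITY_TYPES = {"health facility", "school"}

def validate_record(record: dict, districts: set) -> list:
    """Return a list of data quality problems (empty if the record is clean).

    `districts` is the shared taxonomy of districts the programme works in,
    so the same check applies at every site.
    """
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("facility_type") not in ALLOWED_FACILITY_TYPES:
        problems.append(f"unknown facility type: {record.get('facility_type')}")
    if record.get("district") not in districts:
        problems.append(f"district not in shared taxonomy: {record.get('district')}")
    return problems
```

Because every record carries a `form_id`, any aggregate built from validated records can list the IDs it was computed from, preserving the audit trail back to source forms.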
Anticipating these challenges, we spent time during the programme inception period documenting the CEP CSC process and implementing a system to help manage and monitor it.
Documenting community scorecards as a process
The CEP approach to CSC was developed by the consortium partners, many of whom (CESC, N'weti and IDS) already had considerable experience of using this tool. Once the approach was refined and adapted, we ran a workshop with CEP programme staff to:
- Review the CSC methodology and agree on ways in which a standard version could be implemented for the CEP context
- Identify a series of stages to guide the implementation process and agree on points where review or sign-off is needed
- Document the data requirements of the monitoring and evaluation and programme teams and ensure that the data needed to measure indicators is captured on a data collection form
- Map data requirements to a set of simple forms that can be completed by the implementing partners at the appropriate stage of implementation
- Map the forms to the stages in a standardised workflow required for data collection
- Define reports that can export this data, aggregating across all sites
We then produced paper versions of the forms in Portuguese for field-testing with partners.
What follows is an overview of the standard stages identified for implementing the CSC process in CEP. Each stage gives the implementing partner clear guidance on the activities to carry out and the data to collect.
Implementing a BetterData app to manage and monitor the process
The configuration tools in BetterData enabled us to implement the monitoring and evaluation system without involving software developers. This was important, as feedback during the rollout process identified a number of areas where small changes to the forms or workflow were needed. We expect this to continue as the programme reaches scale.
To configure the Community Scorecard App we used the following BetterData components:
We created each of the following forms identified in the documentation process:
- Registration form (used to register a new facility)
- Group form (used to register a community group taking part in the scorecard process for a facility)
- Evidence form (used to register the scorecard completed by each group)
- Action plan form (used to register the joint action plan agreed by the community and provider groups)
- Monthly progress form (used to record progress in resolving actions)
- Meeting form (used to record engagement meetings)
- Outcome form (used to track planned or unplanned outcomes resulting from the process)
- Contact form (used to register key stakeholders)
We defined taxonomies (standardised lists of terms) to help cross-reference data. For example, we have one taxonomy of the districts that CEP operates in and another of the types of groups recruited in the CSC process. These ensure that each site cross-references information in the same way, which also makes cross-cutting analysis easy.
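The value of a shared taxonomy shows up as soon as you try to analyse across sites. A minimal sketch, assuming hypothetical group-type terms (the real taxonomy lives in BetterData): because every site uses the same terms, group records from different sites can be counted together directly.

```python
from collections import Counter

# Hypothetical taxonomy of group types; the terms are assumptions for
# illustration, not CEP's actual list.
GROUP_TYPES = ["women", "youth", "service users", "service providers"]

def groups_by_type(group_records: list) -> dict:
    """Count registered groups per taxonomy term across all sites.

    Each record is expected to carry a `group_type` drawn from the taxonomy.
    """
    counts = Counter(r["group_type"] for r in group_records)
    # Report every taxonomy term, including those with no groups yet,
    # so the same columns appear in every report.
    return {term: counts.get(term, 0) for term in GROUP_TYPES}
```

Without the taxonomy, one site writing "youth group" and another "jovens" would fragment these counts; the shared term list is what makes the aggregation trustworthy.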
We configured a workflow to match each of the stages documented above. This helps to guide staff through the implementation process and ensures that facilitators on each site complete the same activities, using the same forms, at each stage of the implementation. The workflow links to system notifications that prompt users when a new stage has started and what specific actions they need to take. It also includes sign-off stages, where someone else must review the data entered. Building data quality checks in at the operational level is a powerful way to address problems when they are still easy to fix.
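The workflow idea can be sketched as a simple state machine. This is not BetterData's actual engine, and the stage names are illustrative; the point is the mechanic: stages run in a fixed order, and stages marked for sign-off block progress until a second person has reviewed the data.

```python
# Illustrative stage names; the real CEP workflow stages differ.
STAGES = ["register facility", "recruit groups", "score services",
          "joint action plan", "monitor progress"]
SIGN_OFF_STAGES = {"score services", "joint action plan"}

def advance(site: dict) -> dict:
    """Move a site to its next stage, enforcing sign-off where required.

    `site` holds a `stage_index` into STAGES and an optional
    `signed_off_by` reviewer name for the current stage.
    """
    stage = STAGES[site["stage_index"]]
    if stage in SIGN_OFF_STAGES and not site.get("signed_off_by"):
        raise ValueError(f"'{stage}' needs review before the site can advance")
    if site["stage_index"] + 1 >= len(STAGES):
        raise ValueError("site has completed the final stage")
    # Reset the sign-off so the next stage needs its own review.
    return {**site, "stage_index": site["stage_index"] + 1, "signed_off_by": None}
```

Refusing to advance a site past an unreviewed stage is exactly the operational-level data quality check described above: the problem surfaces while the facilitator is still on site and the data is easy to correct.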
The reports component makes it easy to see the full data model of each form. These can be mapped to column headings in a report. Since the Scorecard app includes a workflow, we can also include the workflow stage in the reports.
We used this to create reports that show the workflow status of each site in relation to the target dates agreed for that site. This is an important management tool to track the implementation of all sites and see which are behind schedule.
Other reports track issues raised by groups in their scorecards and then those agreed across all groups in the action plans. This makes it easy to analyse which issues are being raised across several sites and which issues are raised by groups, but not included on action plans.
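The cross-site issue analysis amounts to counting and set difference. A sketch with hypothetical field names: count how many sites raise each issue in their scorecards, and list issues that groups raised but that never made it onto an agreed action plan.

```python
def issue_report(scorecards: list, action_plans: list) -> dict:
    """Summarise issues across sites.

    `scorecards` and `action_plans` are lists of records shaped like
    {"site": ..., "issues": [...]} (illustrative field names).
    """
    # Which sites raised each issue in their scorecards.
    raised = {}
    for sc in scorecards:
        for issue in sc["issues"]:
            raised.setdefault(issue, set()).add(sc["site"])
    # Every issue that appears on at least one action plan.
    planned = {issue for ap in action_plans for issue in ap["issues"]}
    return {
        "sites_raising_issue": {i: len(sites) for i, sites in raised.items()},
        "raised_but_not_planned": sorted(set(raised) - planned),
    }
```

The second list is often the more interesting one for programme staff: issues communities keep raising that are not being carried into joint action plans.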
Since the full data model for all forms is accessible, more reports can be configured at any time as new questions arise. Once configured, reports can be exported as XLS at any time.
Finally we configured the indicators used in the logframe and linked these to the forms that collect this data. This ensures that the data used to monitor each indicator can be tracked back to the source.
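Indicator traceability can be sketched in a few lines. The indicator name, status values and field names below are assumptions for illustration: the point is that the computed value is stored alongside the IDs of the forms it was derived from, so anyone auditing the logframe can walk back to the source.

```python
def actions_resolved_indicator(progress_records: list) -> dict:
    """Compute a hypothetical logframe indicator from monthly progress forms.

    Each record is assumed to carry a `form_id` and a `status`.
    """
    resolved = [r for r in progress_records if r["status"] == "resolved"]
    return {
        "indicator": "number of action plan actions resolved",
        "value": len(resolved),
        # Audit trail: the exact source forms behind the reported value.
        "source_form_ids": sorted(r["form_id"] for r in resolved),
    }
```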
All staff and partners can browse a list of scorecard profiles on BetterData. Each profile has pages to show the workflow status, the forms submitted and the people working on that site. This makes it easy for staff working on different sites to share information and track their progress.
Is it feasible to share this app with other partners?
The Community Scorecard app has been configured and field tested by CEP. As the programme scales it will no doubt raise new issues that need to be incorporated into the app. This will mean making further changes to the forms, workflow and reports as CEP grows.
However, both CESC and N'weti (CEP partners) already use community scorecards in their work, as do several other NGOs in Mozambique. How feasible is it to share an app designed to manage such a specific process with other partners? What areas would they need to modify to adapt it to their context? Sharing the app could bring several benefits:
- Save time and money - Partners can use a pre-configured app without needing to go through a time-consuming process of defining each data collection form. They use forms and a workflow that have been tried and tested.
- Share and aggregate data - Partners can (should they choose to do so) share data with each other and aggregate it to provide a higher level picture. In this case the aggregated data is at the activity level, not just the indicator level.
- More powerful advocacy - Advocacy asks can be informed by a much larger dataset.
- Value for money - Partners can benchmark metrics that help them understand where they can increase efficiency and effectiveness.
The challenges are also complex. Will one organisation agree to share its methodology with others (as CEP has agreed to do)? Will there be enough consistency in the data collection forms and workflow across partners to make this viable? Are there contextual issues that require the process to be significantly adapted? Or do different organisations implementing the 'same' CSC tool actually mean very different things in practice?
We'll have better answers to these and other questions as we explore this approach with community scorecards and other tools like it. Meanwhile if this approach looks relevant and useful in your work then get in touch.