LEARNING BY DOING

> Tracking Practice

MONITORING

Monitoring at Twaweza aims at enhancing our understanding of what works under which conditions, and at being transparent and accountable. This includes the design of mechanisms (such as feedback loops) which give us practical data for programmatic decisions. Monitoring also seeks to be collaborative, both within Twaweza and with implementing partners. While the core function of monitoring is not to audit but to enhance learning, it does include clear measures of accountability (to ourselves as well as to donors) and of value for money.

WHY DO WE MONITOR?

Twaweza’s internal monitoring aims to document what we do and why, and to follow up on what works, enabling learning and informed decision-making.

We control an initiative up to a point (say, the point of production), but then we release it into the real world. To understand what actually happens “out there,” we monitor. Monitoring helps us to be accountable, but above all we are curious: who is the initiative reaching? In what volume? What do people think of it? This applies equally to materials we produce and to broadcasts; it also applies to engagement inputs and strategies. It is relevant for single products (e.g. just radio shows) as well as for a package of products that go together (e.g. radio shows, combined with a televised interactive campaign, print material, and liaising and engaging with major stakeholders in government).

Monitoring is an important part of our learning loops and is closely linked to learning, communication and evaluation. Exploring new ways of working brings with it the need to document what we do and to follow up. Monitoring generates the information that allows us to learn, to do things better, and to make informed decisions about the next initiative. It is also crucial for informing evaluation and for our communication with partners, the public and donors. Both internal monitoring and external evaluation are closely linked to Twaweza’s outcome indicators as articulated in the strategy [link].

WHAT DO WE MONITOR?

We track the following (a simple, illustrative tracking record is sketched after the list):

  • Delivery/distribution: where was the product sent, how far along the pipeline did it get, and in what volume?
  • Coverage: of all the potential users who could have received it, what proportion actually did?
  • Quality: what do we think of the quality? What do the end-users think of it? And do experts have anything to say?
  • Feedback from users: beyond perceived quality, what other feedback do we get? For example, is the product new, interesting, useful? If it is useful, then how? Has it been used?
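
As an illustration only – the record structure and field names below are hypothetical, not an actual Twaweza schema – these four dimensions could be captured in a single tracking record per output:

    from dataclasses import dataclass, field

    @dataclass
    class OutputRecord:
        # Hypothetical record for one output, covering the four tracked dimensions.
        name: str                      # e.g. a radio episode or a printed booklet
        units_distributed: int         # delivery/distribution: how many units went out
        potential_audience: int        # coverage denominator: all who could have received it
        reached_audience: int          # coverage numerator: those who actually did
        quality_ratings: list = field(default_factory=list)  # scores from users and/or experts
        user_feedback: list = field(default_factory=list)    # comments on novelty, usefulness, use

        def coverage(self) -> float:
            # Proportion of the potential audience actually reached.
            if not self.potential_audience:
                return 0.0
            return self.reached_audience / self.potential_audience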

Monitoring gives us considerable insight. But it stops short of asking (and answering) the ultimate question: did the initiative contribute to change, and what kind of change? This is the question that is asked by evaluation.

HOW DO WE MONITOR?

Monitoring at Twaweza comprises three tiers (see diagram), each with several components. These cover a range of different activities, mostly undertaken by the LME unit, which act as independent verification and learning exercises on Twaweza’s core activities.

Tier 1:

The production of outputs is monitored through the units overseeing the relevant contracts, including ensuring that outputs conform to internal quality standards (e.g. printing quality, sampling standards). These internal standards are developed and updated by the units with the relevant specialization. The distribution of outputs can be monitored by the contracting party (self-reports), although an independent check is usually required as well. This check can be performed by the overseeing Twaweza unit or by LME, and is usually quantitative in nature. The main task of the LME unit is to provide a structure for developing Tier 1 monitoring plans, guidance on standards and tools, and assistance in carrying out independent checks, and to promote the use of data and evidence to inform implementation decisions.
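
To illustrate the kind of independent, quantitative check described above – the figures and district names below are invented, not real monitoring data – a simple spot check might compare a contractor’s self-reported delivery numbers against independently verified counts:

    # Hypothetical spot check: contractor self-reports vs. independent field verification.
    self_reported = {"district_A": 5000, "district_B": 3200, "district_C": 4100}
    verified = {"district_A": 4800, "district_B": 2100, "district_C": 4050}

    for district, reported in self_reported.items():
        found = verified.get(district, 0)
        gap = (reported - found) / reported if reported else 0.0
        status = "follow up" if gap > 0.10 else "ok"  # flag discrepancies above 10%
        print(f"{district}: reported {reported}, verified {found}, gap {gap:.0%} -> {status}")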

Tier 2:

Feedback loops

Twaweza’s interventions and initiatives will have the greatest chance of success if they are subjected to repeated testing, tweaking and adapting. An essential component of this is setting up feedback loops. These are tailor-made, small-scale measurement exercises with the following characteristics:

  • They reach out to the target audience and measure the perceived quality, relevance and usefulness of the intervention/initiative
  • The data collected through them is actionable – something the organization can change during implementation
  • The data is relatively easy to collect and to analyse, with only internal or limited external capacity needed (see the sketch after this list)
  • They have a very quick turnaround from data collection to application, and from findings to learning
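
A minimal sketch of what “easy to collect, easy to analyse” could look like in practice – the dimensions, ratings and action threshold are invented for illustration:

    import statistics

    # Hypothetical feedback-loop responses from a small target-audience sample (scores from 1 to 5).
    responses = [
        {"quality": 4, "relevance": 5, "usefulness": 3},
        {"quality": 3, "relevance": 4, "usefulness": 2},
        {"quality": 5, "relevance": 5, "usefulness": 4},
    ]

    # Quick turnaround: average each dimension and flag anything below an action threshold.
    for dimension in ("quality", "relevance", "usefulness"):
        mean_score = statistics.mean(r[dimension] for r in responses)
        note = "tweak before the next round" if mean_score < 3.5 else "on track"
        print(f"{dimension}: {mean_score:.1f} ({note})")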

Coverage

The most innovative ideas and greatest initiatives will not result in any significant effects if they are not implemented widely enough among the selected audience or population. Twaweza therefore endeavours, wherever possible, to measure the coverage (or reach) of its initiatives. The mechanisms for this vary depending on the type of initiative, and can range from a nationally representative survey to a tailor-made exercise following just a selected sample of individuals. In addition, we measure how much coverage Twaweza gets in the media, and the quality of that coverage. This is one of the ways in which we can gauge whether we are influencing the tone and nature of national dialogue and debate, as presented through the media.
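
For the survey-based case, a minimal sketch of a coverage estimate follows; the responses are invented, and a real exercise would apply survey weights and the actual sample design:

    import math

    # Hypothetical survey responses: 1 = respondent was reached by the initiative, 0 = not reached.
    reached = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1]

    n = len(reached)
    p = sum(reached) / n                      # point estimate of coverage (reach)
    se = math.sqrt(p * (1 - p) / n)           # standard error, assuming simple random sampling
    low, high = p - 1.96 * se, p + 1.96 * se  # approximate 95% confidence interval
    print(f"Estimated coverage: {p:.0%} (95% CI {low:.0%} to {high:.0%}, n={n})")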

Expert assessment

For some of our outputs an important measurement and validation exercise is the opinion or assessment by experts in a given field. These exercises are complementary to the feedback from the target audience; the purpose is to obtain an objective view of the quality and relevance of a product. Twaweza conducts a limited set of such exercises, and each is tailor-made for the particular output.

Tier 3:

Assessing the link between outputs and intermediate outcomes is an essential component of Twaweza’s internal measurement structure. This area of work, sometimes referred to as process evaluation, is conceptually and logistically demanding, as it goes beyond tracking the actual outputs and into measuring the possible effects these outputs can have on the intended target audiences. In many cases, we envision using the mechanisms developed for Tier 2 feedback loops to also gather feedback related to intermediate outcomes. As in Tier 2, Twaweza’s LME unit will establish shared core concepts, guidance on how to think about and plan for assessing intermediate outcomes, and a set of standards, mechanisms and tools (including feedback loops) for implementation.

We want to make it easy to follow the results chain from Twaweza, via partners, to changes reflected in the media; to trace changes backwards; to see links and synergies; and to search by sector, network or goal. We aim to develop a web interface that communicates monitoring information and allows the public, external evaluators, and our partners and donors to input and access data and generate reports, making it easier for others to combine different sources, create new knowledge and share lessons.