Added Value Surveys

We believe the best way to ensure efficiency and effectiveness is regular monitoring, evaluation, reflection on impact and learning. Complex programmes require a mix of methods to do this. On this page we share some of our tools and processes. In particular, we are now championing 'Difference in Differences' as a cost-effective yet rigorous and robust evaluative method.

Monitoring, Evaluation, Impact and Learning (MEIL)

As consultants we are of course asked to conduct evaluations.  A few examples are here.

 

However, our special interest lies in five elements of the evaluation process that we think have considerable potential.

 

The first is Information, Knowledge, Attitude, Practice and Behaviour (IKAPB) studies. 

KAP studies have been around a long time; our extra insight is this. We have been using the Theory of Planned Behaviour (TOPB) to design and construct the KAP. This well-established theory has been used in other sectors for many years, but we think we may have been among the first to use it in a complex development programme. We try to explain it here.

What the TOPB does is enable the development intervention to direct its efforts to the right place, and to create messages that work and change behaviour. In our first use of it, back in 2003, we found that after analysis and work with NGOs across Northern Ghana the impact was dramatic, with changes in behaviour on the uptake of improved stoves and on the management of wild wood collection.
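For readers who want to see the mechanics, the sketch below illustrates the expectancy-value calculation that underpins the Theory of Planned Behaviour: each outcome belief is weighted by how strongly the respondent holds it and by how they evaluate that outcome, and the weighted beliefs are summed into an attitude score. The data and column names are purely hypothetical.

```python
# Minimal sketch of the TOPB expectancy-value calculation.
# b_* = belief strength (e.g. 1-7), e_* = evaluation of that outcome (-3 to +3).
# All data and column names here are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "b_stove_saves_fuel":     [7, 5, 2],
    "e_stove_saves_fuel":     [3, 2, 1],
    "b_stove_costs_too_much": [2, 6, 7],
    "e_stove_costs_too_much": [-2, -3, -3],
})

# Attitude towards the behaviour = sum over beliefs of (strength x evaluation).
belief_cols = [c for c in df.columns if c.startswith("b_")]
df["attitude_score"] = sum(
    df[col] * df["e_" + col[len("b_"):]] for col in belief_cols
)
print(df["attitude_score"])
```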

The second is a derivative of that: the use of non-parametric data within the study.

TOPB suggests that focus groups are held to collect qualitative data on the subject. People make statements; in TOPB terms they make 'outcome belief' statements, but this next part would be true even without running a full TOPB. In a normal focus group it is often difficult to tell whether a statement is the opinion of one person whom the others are too shy to contradict, or genuinely how the group feels. Our insight is to test these statements across a wider household survey. This turns qualitative statements into non-parametric variables, and when they are run across a wider sample it is possible to see whether the sample as a whole subscribes to the statement or whether it was a one-off idea in a focus group.
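As a minimal sketch of that step (with made-up numbers): the focus group statement becomes a yes/no item in the household survey, and a simple non-parametric test shows whether agreement across the wider sample is better than chance.

```python
# Hypothetical example: does the wider sample subscribe to a statement
# first heard in a focus group? Counts are invented for illustration.
from scipy.stats import binomtest

agree = 164        # households agreeing with the outcome belief statement
sample_size = 220  # households asked

result = binomtest(agree, sample_size, p=0.5, alternative="greater")
print(f"{agree / sample_size:.0%} agree, p = {result.pvalue:.4f}")
# A small p-value suggests the statement reflects the sample as a whole,
# not just one vocal voice in the focus group.
```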

Our third element is the use of statistical analysis software to look at the linkages across the surveys.

Within the Theory of Planned Behaviour, we can then use a model and non-parametric statistics to line up the statements and see whether they are barriers to or drivers of our desired behaviour. This leads to our third element: the use of statistical analysis software to look at the linkages across the surveys. Knowing that 20% of children have diarrhoea is not that useful. What is useful is knowing which 20% of children, and what the characteristics of their households are. By linking the descriptors, knowledge, attitudes and practice with the outcomes relevant to the intended intervention (such as childhood diarrhoea), a more nuanced picture emerges and the design of the intervention becomes more focused. We really feel that many agencies are spending a lot of money on KAP surveys and not analysing them in sufficient depth. The marginal cost of the extra analysis is small in terms of the added value.
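As an illustration of the kind of linkage we mean (using invented household data), cross-tabulating a household descriptor against the outcome and testing the association takes only a few lines in standard statistical software:

```python
# Hypothetical example: link a household descriptor to an outcome
# rather than reporting a single headline percentage.
import pandas as pd
from scipy.stats import chi2_contingency

households = pd.DataFrame({
    "water_source":    ["borehole", "river", "river", "borehole", "river", "borehole"],
    "child_diarrhoea": ["no",       "yes",   "yes",   "no",       "no",    "no"],
})

table = pd.crosstab(households["water_source"], households["child_diarrhoea"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# The question shifts from 'how many children have diarrhoea?' to
# 'which households do they live in?', which is what intervention design needs.
```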

Our fourth element goes one step further and proposes 'Difference in Differences' analysis.

Recently we have been able to work with NGOs to take the analysis a step further. 'Difference in Differences' (DID) studies have been around a while, but few people realise the cost effectiveness of this quasi-experimental approach. In the current climate, where value for money in development is being scrutinised and an increasingly professional cadre of workers wants to know what works and what doesn't, DID studies can make a significant contribution. They are not randomised controlled trials (RCTs), which tend to be very expensive and difficult to apply ethically in complex development interventions. DID studies offer a cost-effective insight, built on household surveys that are often undertaken and under-analysed, which is robust enough to provide evidence of what works and what doesn't.
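A minimal sketch of the DID calculation itself, on hypothetical before-and-after outcomes from treated and comparison communities:

```python
# Hypothetical example: Difference in Differences on household survey data.
# 'treated' marks communities receiving the intervention; 'after' marks the
# follow-up survey round; 'outcome' could be, say, the share of households
# using improved stoves. All numbers are invented.
import pandas as pd

df = pd.DataFrame({
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "after":   [0, 0, 1, 1, 0, 0, 1, 1],
    "outcome": [0.40, 0.44, 0.62, 0.66, 0.41, 0.43, 0.47, 0.49],
})

means = df.groupby(["treated", "after"])["outcome"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"DID estimate of the programme effect: {did:.3f}")
# The change in the comparison group is subtracted from the change in the
# treated group, netting out trends that would have happened anyway.
```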

Our fifth element is data collection via Information and Communication Technology.

Finally, when we consider the cost effectiveness of evaluative surveys, we have recently trialled Android-based tablets for the collection of household data. This has proven very cost effective. The use of technology for data collection has been a subject of discussion in the development sector for a while. On other web pages we have discussed our involvement with the Real Time Monitoring for the Most Vulnerable project, and our use of text messaging for a wide and broad collection of governance and disaster mitigation activities from 36,000 poor and vulnerable individuals and grassroots organisations.

Here, however, we wish to highlight our use of tablets in the collection of HIV data in Malawi. Our analysis showed that for the cost of photocopying 800 paper questionnaires we could purchase 10 Android-based tablets. One of the weaknesses of surveys has been the digitisation of data: paper responses are typed into the computer by bored subordinates who often make mistakes, and we often took two weeks just to clean the data. With the tablets, the data is instantly available, and with skip codes and greyed-out questions, duplicate and confusing answers are not possible. A respondent can no longer be at primary school and university simultaneously! Cleaning and checking the data is considerably easier. And we found that even high school graduates who had never handled a computer could easily pick up how to use a tablet; training was easy.
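To illustrate the kind of check the tablets enforce at the point of entry, here is a hypothetical sketch of skip-code and consistency validation (field names are invented for the example):

```python
# Hypothetical sketch of skip-code and consistency checks of the kind a
# tablet-based form applies as the enumerator enters answers.
def validate_response(answers: dict) -> list:
    errors = []
    # Skip logic: education questions only apply if the respondent is enrolled.
    if answers.get("currently_enrolled") == "no" and answers.get("education_level"):
        errors.append("education_level answered but respondent is not enrolled")
    # Mutually exclusive choices: only one education level may be selected.
    levels = answers.get("education_level") or []
    if isinstance(levels, list) and len(levels) > 1:
        errors.append(f"multiple education levels selected: {levels}")
    return errors

# A respondent can no longer be at primary school and university at once.
print(validate_response({"currently_enrolled": "yes",
                         "education_level": ["primary", "university"]}))
```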

As you might tell from the text, we are enthusiastic about the use of tablets for data collection. Not from a geek's delight in technology, but because they offer a mechanism for cost-effective studies: studies that give development agencies clear evidence of the context and progress of their development interventions. And this leads to better development, stronger livelihoods and less poverty.



Institute of Development Studies

Simon was seconded to the Institute of Development Studies as Impact and Learning Team (ILT) Manager.

In 2010 Simon was asked to help create the Impact and Learning Team at IDS. Working part time, he established the team, undertook key research on the policy and practice enabling environment, and helped the management and Board of Directors of IDS assess the impact of IDS itself.

 
Difference in Differences

Difference in differences (DID) is a quasi-experimental technique used in econometrics that measures the effect of a treatment at a given period in time. It is often used to measure the change induced by a particular treatment or event, though it may be subject to certain biases (mean reversion bias, etc.).
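In its standard regression form (a textbook formulation, not tied to any particular study) the estimator can be written as:

$$ y_{it} = \alpha + \beta\,\mathrm{treat}_i + \gamma\,\mathrm{post}_t + \delta\,(\mathrm{treat}_i \times \mathrm{post}_t) + \varepsilon_{it} $$

where $\delta$ is the difference-in-differences estimate of the treatment effect: the change observed in the treated group minus the change observed in the comparison group.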

 
Monitoring, Evaluation, Impact and Learning


Gamos takes a mixed-method approach to Monitoring, Evaluation, Impact and Learning. We believe that MEIL has mixed purposes and audiences, ranging from formative learning to accountability, both upwards to donors and downwards to beneficiaries. It is inevitable that no one approach can address all the needs of the different audiences.