No one ever fails in ICT4D. Isn’t that amazing? Technologies come and go quickly – bye-bye PDAs, Windows Vista, and soon Nokia – yet in ICT4D, every project has impact and we never fail. We just have “lessons learned.” In fact, can you name a single technology program that has publicly stated that it failed?
This is Oscar Night Syndrome – the need to always look good – and ICT4D is deep in denial. At the Best Practices in Measurement and Evaluation Technology Salon, we dove into the need for monitoring and evaluation in ICT4D and the tools that can help us do it. What did we find?
ICT4D does not have an M&E culture
ICT projects do not exist in a vacuum. Many funders have indicators they expect a project to affect, and they often require some level of M&E. But this evaluation is often an afterthought at best, where inputs (number of trainings) and outputs (number of people trained) are counted but there isn’t any qualitative analysis (how did attendees’ mindsets change after the training?).
Add to this the need to show results to the donor, donors’ minimal tolerance for failure or anything else that could be seen as waste, and the current climate of “accountability” in political circles, and woe to the foolish organization that doesn’t turn in a shiny result complete with great storylines and images.
Just think about all the lessons (re)learned in every project, buried deep in a report, while a picture of a woman smiling with a mobile phone graces the cover and everything is rosy in the press release.
How can we change that?
Our main focus at the Salon was how to change the current M&E climate in ICT4D – how to better monitor, measure, and evaluate the projects we work on to improve our outcomes and our profession. We identified four areas where we could improve M&E in ICT4D.
1. Quasi-Experiments
In health, randomized control trials (RCTs) are used extensively for impact evaluation. Technically called “experiments,” RCTs have a few limitations – they are expensive, take a while, and can only test one hypothesis at a time. A better option for the developing world context, and for ICT especially, is the “quasi-experiment.”
Quasi-experiments are exactly like experiments (or RCTs) but without random assignment to control groups – almost the same, yet more feasible and possibly more ethical. Quasi-experiments can also accommodate the rapid change in technology ecosystems.
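One common quasi-experimental estimator is difference-in-differences: compare the before-and-after change in a group that got the intervention against the change in a comparable group that did not, netting out background trends. A minimal sketch, with entirely hypothetical survey scores:

```python
# Difference-in-differences, a common quasi-experimental estimator.
# All values below are made-up survey scores for illustration only.
treatment_pre, treatment_post = 52.0, 61.0   # group that received the ICT intervention
control_pre, control_post = 50.0, 54.0       # comparable group that did not

# The estimated effect is the treatment group's change minus the
# control group's change, which subtracts out shared background trends.
effect = (treatment_post - treatment_pre) - (control_post - control_pre)
print(effect)  # 5.0
```

The key assumption is that both groups would have followed parallel trends without the intervention, which is why choosing a genuinely comparable control group matters so much.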
Regardless of the experimentation level, there is no excuse for us not continuously measuring outcomes – now and for years after the project ends. How else can we really know the impact of our work unless we track it beyond the 1-3 year grant cycle?
2. Qualitative Analysis
Everyone loves numbers, yet often the best results are qualitative – changes in beneficiary perceptions that cannot be captured by numbers alone. How can we bring these tangible yet “fuzzy” results into ICT4D M&E? In-person interviews, observations, focus groups, and the like, performed in country, are the best. Qualitative results can also be used in the formative stages of project design to guide future actions and form the basis of quantitative statistical monitoring.
One cheap way to collect direct qualitative results is to monitor social networks like Twitter and Facebook to see what your beneficiaries are saying about the project. Just be sure to remember user bias: the users of Facebook and Twitter tend to be the elite in the developing world. Nothing can replace the face-to-face.
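Even simple keyword counting over exported posts can surface what beneficiaries are talking about. A minimal sketch, assuming you already have post text in hand (the posts and keywords below are hypothetical; in practice the text would come from whatever export or API your platform provides):

```python
# Minimal keyword-based monitoring of social media posts.
# Posts and keywords are hypothetical examples.
from collections import Counter

posts = [
    "The new clinic SMS reminders are really helpful",
    "Training session was confusing, the app kept crashing",
    "SMS reminders saved me a trip to the clinic",
]
keywords = ["sms", "training", "app", "clinic"]

# Count how many posts mention each keyword (case-insensitive).
counts = Counter()
for post in posts:
    text = post.lower()
    for kw in keywords:
        if kw in text:
            counts[kw] += 1

print(counts.most_common())  # e.g. [('sms', 2), ('clinic', 2), ('training', 1), ('app', 1)]
```

This is only a first-pass signal – it tells you what topics recur, not why – so it complements rather than replaces interviews and focus groups.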
3. Common Standards
In developing this Salon, I thought M&E stood for “measurement and evaluation” when it actually means “monitoring and evaluation” – just one example of the need for a common language in M&E. From there, we can dive deep into the different measurements that ICT affords – from click rates to retweets – yet we need to remember that we should be targeting the non-technology audience, and they should understand our terms.
Even better than a common language would be a common ICT4D M&E framework – something along the lines of NPOKI, a health-centric performance management system shared among different health organizations. Such a multi-organizational M&E framework allows for an apples-to-apples comparison of project effectiveness that transcends specific projects or even organizations.
4. Implementation Evaluations
Yes, your project may have great outcomes, but was your implementation of that project the best it could be? What about measuring ICT implementations – the very act of deploying a project? By not engaging in implementation evaluations, be they formal reviews or at least internal ones, we are missing great opportunities to learn how we can do our jobs better and improve the ICT4D profession as a whole. I know I would like to know how I compare with my peers in ICT deployment. Am I faster, better, cheaper, or do I just talk a good game?
World Vision has a company-wide programme management information system that tracks common indicators in both project delivery and outcomes, helping the organization pinpoint good practices and effective programming. Nethope is also investigating a consortium-wide M&E system to help organizations better allocate internal resources.
Creating Space for Failure
While these four tools can help us build an M&E culture, we must change the mindset of ICT4D practitioners if we expect any of them to really be used. One way to do that is to hold regular meetings where we can talk about what works and what doesn’t – which is exactly what the Technology Salon is for. Another way is a Fail Faire – a positive celebration of failure.
So coming this fall will be a second Fail Faire in Washington DC, building on last year’s event and other internal Faires. If you wanna be one of the cool kids who helps organize it, be sure to email me today!
Together we can change this Oscar Night Syndrome and create a real monitoring and evaluation culture in the information and communication technologies for development community.
I am exiting mainstream media for development projects because of exactly this problem. The project is always a success before it has even started, and once set up, many projects have impossible targets based on success factors developed in other (usually donor) countries. I’ve run a similar invitation-only faire called “To Be Continued,” where we discussed what we would do if the project were to start from scratch in another country or area, and which lessons learned are essential to improving the level of success. I’m near Amsterdam, The Netherlands, home of some huge successes and failures. I would be happy to share with those interested.
Hi Wayan,
Indeed a plan-do-check-act approach is much needed in many projects, not only in the ICT4D field.
Looking at the impact assessment side: Heeks & Molla at SED have compiled this set of approaches for impact assessment of ICT4D projects. Location: http://www.sed.manchester.ac.uk/idpm/research/publications/wp/di/documents/di_wp36.pdf
Regards,
Anand
Glad it’s been brought up!
There could be many systemic reasons for this
– The separation of the funder and the ‘customer’ of most ICT4D projects
– The short-term engagement most organizations work with – while development and social change takes years (I’ve seen too much hinge on summer internships)
– The stakeholders of projects respond to their superiors; and they may need to do the ‘Oscar Night’ show internally inside their NGO/Ministry; in groups that don’t value risk
– Techies confusing building technology with implementing a social program (I see this similar to the confusion in the commercial sector when people don’t understand the difference between building technology and building a product).
– Lack of a culture of agility, leading people to prefer following plans over doing the right thing for the beneficiary, even when the difference becomes clear.
– Losing opportunities to bake the ‘M&E’ of a system into the system itself, or into its data exhaust.
– Naive assumptions on behalf of techies that tech will help, naive assumptions on behalf of luddites that it won’t.
Seeing this at InSTEDD, we not only have full-time research & evaluation folks (R&E – yup, another acronym!) to help design the R,M&E processes for those we work with, but we even put into our contracts that we will hold retrospectives for the projects we participate in, to close the learning loop. ICT4D, mHealth, etc. are all so young that pretending the answers are known damages everyone and strengthens the forces above. Risk is an asset to be carefully invested. Failfaires are awesome. Peace.
This is a great post, Wayan, was sorry to miss the event.
I can’t help wondering what makes “ICT4D” projects different from “development” projects, and why we’d need a different approach to or emphasis on monitoring and evaluation.
As we’ve discussed so many times, tech is a tool, not an end in itself. ICT4D projects (should) use tech as a way to achieve a development outcome more efficiently or at greater scale. In that sense, we should be evaluating those outcomes just as we would on any project, and there are plenty of good systems established for that. No need to reinvent any wheels. The approaches you list above are valid for programs with ICTs just as for programs without, or without ICT as a focus.
As we get into the second decade of the 21st century, I find myself wondering whether “ICT4D” is even a useful term any more. *Any* development project should make use of the most sensible tools for implementing a goal. If you’re trying to reduce malaria cases, for example, of course you’ll want to track data with mobile survey tools, because using clipboards and paper is a huge waste of time and resources. If you’re trying to improve youth skills, of course you’ll want to include computer training, because basic tech skills are central to employment in the 21st century. If you’re a local government, of course you’ll want to use text messaging to get out announcements and information, because most of your constituents have mobile phones, and reaching them that way is cheaper and more effective than driving from village to village with a megaphone.
So I’m not sure why we’re still separating out “tech” projects from those without tech. It pigeonholes tech as a niche interest, when actually we’re living in a world saturated by tech, even in its most remote reaches. There’s no separation in the real world any more, yet somehow the development field sees “ICT4D” as its own field.