I have seen a lot of ICT4D innovations struggle because the initiatives try to innovate on too many things at the same time: a) a social, managerial, or accountability aspect, b) a social-organizational or social-logistical aspect, and c) the technology aspect.
I can think of almost none that succeeded at all three; even two is rare. It may be different in technologies that are aimed directly at consumers and have inherent appeal, such as the telephone. (And even there, people took a long time to learn that the telephone could be used to just chat, not only to exchange specific, quick information.)
For one thing, it seems to me that trying to do too many things is simply too hard. For another, if the effort starts succeeding, or failing, it is hard to know what you were controlling for, and therefore what exactly succeeded or failed. That’s the first principle not only of science, but of tinkering with any complex machinery, such as a car engine.
It seems to me that in bringing technology innovations to development, most projects typically do more than one thing, and some do as many as four things:
- Innovate the “surface” technology (for data capture, storage, transmission, and analytics) and, at the same time, the “deep” or background technology: the analytics and systems that sit behind any interface.
- Innovate a managerial use for data: that is, introduce a use of data for some managerial or policy purpose for which there was no articulated, consensus need to begin with (cameras to measure teachers’ presence, for instance, were not necessarily something teachers were begging for).
- Invent a new indicator or fundamental measure: such as a way of assessing whether children can read (e.g., EGRA, the Early Grade Reading Assessment, which gauges whether children are mastering fundamental reading or pre-reading skills), or a more rapid way to test for malaria.
- Introduce a basic social construct: e.g., the notion that some powerful actors should be “accountable” to others less powerful, such as politicians to taxpayers. I don’t mean paying lip service to accountability – that is commonplace – I mean absorbing it in the way that (probably a legend) the Minister of Finance of Norway kept two inkwells on his desk: one for official business, and one for personal matters, such as a note to his spouse that he might be delayed on a particular day.
I have seen many attempts to do several or even all four of those at the same time. Most seem to collapse under their own weight. Reflecting on some of our ICT for Development innovation successes at RTI, it seems to me that we succeeded best when we (mostly) respected the “change only one thing” rule.
Tangerine Example
Take Tangerine®. This is a tablet-based way to record and report on children’s results on the Early Grade Reading Assessment (EGRA), which we ourselves had developed. We could have introduced the fundamental measurement technique (EGRA) and the electronic (vs. paper) means of recording it (Tangerine) together. That we did not do so was perhaps more a happy accident than a plan.
But the point is that EGRA had become fairly well established in terms of its technical content (what to measure, and why), its logistics (field work, supervisory traditions), and its community of practice (many other NGOs were already using it) by the time the tablet-based technology was developed to make EGRA easier to administer.
At that point people were very clear on why EGRA was useful, so we did not have to sell them on both EGRA and Tangerine at the same time. Many came to us asking, “why don’t you give it a technology base?” Later on, of course, both were often offered as a package, but by then we knew that was what people wanted, and we were confident that the two worked well together.
MEEDS Example
Another example of introducing minimal new technology is the Malaria Early Epidemic Detection System (MEEDS), which we launched in public healthcare facilities in 2008. It uses USSD, a capability that exists in every mobile network operator’s infrastructure, and it works with any mobile phone, so clinicians could use their existing phones at no additional cost to themselves. They already knew how to use those phones, and it was easy for them to enter the data for weekly reporting.
Adoption was quick. In 2012 we added individual malaria case reporting, which built on what clinicians already had and knew. Again, adoption was quick. In these cases, the notion of case reporting was already established in the health care system, and even the technology was familiar: the people who would use it already owned it. The innovation was a relative tweak.
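To make the “minimal new tech” point concrete, here is a rough sketch of the kind of interaction a USSD-based weekly report can involve. This is a hypothetical illustration, not the actual MEEDS implementation; the CON/END reply convention and the save_report helper are assumptions borrowed from common USSD gateway patterns.

```python
# Hypothetical sketch of a USSD-style weekly malaria report (not the actual
# MEEDS code). Many USSD gateways call an application endpoint with the
# session's accumulated input and expect a reply prefixed with "CON" (keep
# the session open) or "END" (close it). The health worker only needs a
# basic phone that can dial a shortcode.

def handle_ussd(session_id: str, phone_number: str, text: str) -> str:
    """Walk a health worker through a two-number weekly report."""
    steps = text.split("*") if text else []

    if not steps:
        return "CON Weekly malaria report\n1. Start"
    if len(steps) == 1:
        return "CON Suspected cases this week:"
    if len(steps) == 2:
        return "CON Confirmed cases this week:"
    if len(steps) == 3:
        _, suspected, confirmed = steps
        save_report(phone_number, suspected, confirmed)
        return "END Thank you. Report received."
    return "END Invalid entry. Please dial again."


def save_report(phone_number: str, suspected: str, confirmed: str) -> None:
    # Placeholder persistence: a real system would validate the figures and
    # attach them to the facility registered against this phone number.
    print(f"{phone_number}: suspected={suspected}, confirmed={confirmed}")
```

The point of the sketch is simply that all the novelty sits on the server side; the phone in the clinician’s pocket does not change.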
MyTax Example
A counterexample, also from our own experience, is useful. Recently we tried to innovate with MyTax, a system for Ugandan municipalities to communicate with local taxpayers via basic (“dumb”) phones about what they owed and what was being done with their funding. (Mutual accountability.) In the event, too many things were introduced at the same time.
MyTax had no already existing, well-organized, electronic tax management system, or even financial management system, to hook into properly. The very notion of accountability for taxes also still needed to develop. Thus, introducing MyTax required going back and re-engineering some of the basics of an efficient public financial management system, as well as building a culture of accountability.
It would have been far easier to introduce a simple interface (the basic phone) if all those other things had already been in place.
This does not mean MyTax won’t succeed, but it may take longer, and cost more in money and headaches, than was originally planned. In essence, what was initially viewed as the introduction of a relatively simple innovation turned into developing an entire model of financial and revenue management, with a heavy dose of change management.
In this case, ICT was relegated to a secondary role, since we first had to put the basic stepping stones in place to create the conditions for the technology, and to keep up a bit of an ongoing supply push. The other two technologies noted here, by contrast, seemed to take on a life of their own after their initial introduction.
Lessons Learned
In a few cases where we have tried phone- or tablet-enabled reporting to replace (or at least shadow, in the experimental stage) operational processes, we have found that good old pencil and paper is just better, or at least no worse, and requires no donor intervention and no provision of equipment.
Basic lesson: change one thing at a time. Don’t use a bureaucracy (a donor project) to try to create an accountability revolution, and develop the metrics to track it, and develop the back-office systems to do the analysis, and the cell phone that is the supposedly simple interface to all that other stuff that needs to be developed too.
By Luis Crouch, CTO and VP at RTI International
While I’ve seen the sort of ‘madness’ the title alludes to, and I think Luis hit the nail on the head in the article, I would reword the blanket conclusion in the title. Changing only one thing at a time yields a high probability of getting trapped in incrementalism: 5% improvements, efficient inefficiency, a faster horse cart, and all that.
My favorite example is from when many of the first electronic disease reporting systems became popular: ministries wanted the bits and bytes to follow the trail of paper, from community to clinic to district to province to national level. Most projects that pandered to that, instead of taking the time for education and capacity building, ended up with massive, burdensome connection, reliability, and support costs imposed by the hierarchical transmission, costs that eclipsed many of the benefits of electronic reporting. They had set aside the insight that what people wanted was hierarchical “approvals” and review, not a physical network topology. These bad topologies confounded two experiments: one in digital data collection, and one in power and accountability dynamics. Once good technical design happened and field supervisors saw that data going physically to a cloud server still gave them ‘review and release’ control, the innovation in digitization could continue.
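One way to picture the “approvals as workflow state, not network topology” point: reports can land in a central store in one hop, while the hierarchy lives in review flags on each record. The sketch below is purely illustrative, assuming a hypothetical FacilityReport record and release functions rather than any particular reporting platform.

```python
# Illustrative sketch (hypothetical, not any specific platform): the report
# travels straight from the facility to a central store, and the district/
# province hierarchy is expressed as "review and release" flags rather than
# as physical hops from clinic to district to province to national servers.
from dataclasses import dataclass


@dataclass
class FacilityReport:
    facility_id: str
    period: str                      # e.g. "2012-W14"
    confirmed_cases: int
    district_released: bool = False  # district supervisor's review gate
    province_released: bool = False  # provincial review acts on the same record


CENTRAL_STORE: list[FacilityReport] = []  # stands in for the cloud database


def submit(report: FacilityReport) -> None:
    """One hop: the data goes physically to the central store."""
    CENTRAL_STORE.append(report)


def district_release(report: FacilityReport) -> None:
    """Supervisors keep their gatekeeping role without handling transport."""
    report.district_released = True


def visible_to_province() -> list[FacilityReport]:
    """Only district-released reports flow 'up' the logical hierarchy."""
    return [r for r in CENTRAL_STORE if r.district_released]
```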
I think the ‘madness’ examples we’ve seen suffered not from trying to change many dimensions simultaneously, but from assuming those changes were all desirable and feasible without enough testing, and from not understanding their own innovation goal. Handing out Android phones to fill in the same old paper forms, now in technicolor, is different from assuming that installing an app on a phone is enough to change a societal power dynamic of the kind Luis describes so well. We’ve all seen the skeletons of those “cargo cult innovation” projects that tried to change everything at once, with no feedback loop.
There is also a perverse allure in the funding systems of the humanitarian and development sectors to masquerade incrementalism as innovation, one that many people working in fundraising have felt: sometimes it is more socially acceptable to fail in the traditional ways than to succeed in innovative ways.
I think the experiences Luis describes, and that many of us share, may point to an alternate way to word it: successful innovation happens with designers and program leads who have enough insight and experience to understand *what* the innovation is about, at its core, and who focus on incrementally testing that hypothesis. This means focusing on the end versus the means, and avoiding adding noise to those experiments.
Testing the innovation hypothesis may (and typically will) require “changing more than one thing at a time” … and savvy field folks are those who will change only those things and leave all the satellite issues alone. They will change more than one thing at a time, in the simplest way possible, no more but no less, and with a finger on the pulse of the project to keep a good feedback loop going. And they do that despite the perverse incentives of the grant systems.
In my experience, successful innovation projects seem to embrace a flavor of the Saint-Exupéry quote: “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
Thanks for the extremely thoughtful reply. I wish I could put these things so well myself. I agree with pretty much everything you say and particularly like the formulation: “Successful innovation happens with designers and program leads who have enough insight and experience to understand *what* the innovation is about, at its core, and who focus on incrementally testing that hypothesis. This means focusing on the end versus the means, and avoiding adding noise to those experiments.”