In early 2017, USAID’s StopPalu malaria control project in Guinea started implementing an SMS program to motivate pregnant women to attend antenatal care visits and receive intermittent preventive treatment for malaria. The project integrated an operations research protocol into the SMS program design from the very beginning. Specifically, we did the following:
- Set up an intervention SMS group and a non-SMS comparison group
- Gathered detailed programmatic costs
- Set up feedback loops to ensure continuous program learning and adapting
As of September 2017, 4 months into the study and with 8 months to go, we had over 700 women enrolled in the intervention SMS group and over 500 women in the comparison group. Preliminary results show that the SMS intervention group has 8% more follow-up antenatal care visits than the comparison group.
Three SMS Program Learnings That Will Surprise You
Yet who cares, right? We are all doing SMS interventions, and have been for years. Why is our activity worthy of your attention? Here are three reasons that I think our project is different from the rest.
1. We Know the Cost Per Outcome
While it was encouraging to see good preliminary results, we asked, “At what cost?” To answer this question, we examined total costs, unit costs, and trends in unit cost over time.
The total cost of starting up and implementing the SMS program was $15,000 over one year ($10,000 for startup and $5,000 for maintenance for a full year). When we examined unit costs over time, we saw some interesting results.
In the first month of the program, the cost per woman enrolled was $20.50. By the end of one year, per current trends, this unit cost is expected to fall to $2.95, largely due to economies of scale.
Given that the SMS group has 8% more antenatal care visits than the comparison group, we calculated that the SMS group will see an estimated 23 fewer neonatal deaths over a one-year period than the comparison group.
- It will cost the program $15,000/23 ≈ $652, or roughly $650 per neonatal death averted.
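The cost-per-outcome arithmetic above can be sketched in a few lines. The dollar figures and the count of deaths averted come directly from this post; the variable names are illustrative only.

```python
# Cost-per-outcome calculation using the figures reported in this post.

STARTUP_COST = 10_000      # one-time startup cost (USD)
MAINTENANCE_COST = 5_000   # maintenance for a full year (USD)
TOTAL_COST = STARTUP_COST + MAINTENANCE_COST  # $15,000 over one year

DEATHS_AVERTED = 23        # projected neonatal deaths averted over one year

cost_per_death_averted = TOTAL_COST / DEATHS_AVERTED

print(f"Total program cost: ${TOTAL_COST:,}")
# About $652, which rounds to the roughly $650 figure cited above.
print(f"Cost per neonatal death averted: ${cost_per_death_averted:,.2f}")
```

The same pattern extends to unit costs per woman enrolled: dividing cumulative cost by cumulative enrollment each month shows the decline from $20.50 toward $2.95 as enrollment grows against a largely fixed startup cost.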
Our key takeaway is that it is important for programs that incorporate technology to focus on technology costs: total as well as unit costs, and how costs change over time.
2. We Didn’t Do an RCT – On Purpose!
There is currently a lot of attention on using RCTs to measure the impact of programs in international development, but we made a conscious choice not to fall into the RCT trap. Instead, we opted for a quasi-experimental study design for two key reasons:
- We know SMS works. There is plenty of evidence in the scientific literature that mobile technologies work, i.e., they lead to improved programmatic outcomes. Our goal was not to prove that SMS programs work in general, but rather to understand how this SMS program works (or does not work) in this particular context in Guinea.
- We wanted to continuously learn. We wanted to adapt the SMS program in real time to improve it and achieve the best outcomes possible. Continuous programmatic refinement does not lend itself well to an RCT model, which requires fixed interventions that elevate academic research over actual results.
Our key takeaway is that while RCTs are the gold standard, they might not be the gold standard for your particular context. Choose a study design that meets the needs of your program.
3. We Made Data-Driven Changes Early & Often
Through feedback loops and reflection sessions that brought together SMS program and M&E staff, we have already been able to make necessary course corrections and refinements to the SMS program.
For example, one finding after analyzing data from the first 4 months of the program was that women with little or no education, and those enrolling in the SMS program early in their pregnancy (1-4 weeks into their first trimester), were less likely to seek their follow-up antenatal care visit.
Based on this learning, the SMS program is now developing targeted messages for women with little or no education and for women who enroll early in pregnancy, to increase adherence to follow-up visit schedules.
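The kind of subgroup analysis described above can be sketched simply. The records, column meanings, and thresholds below are hypothetical stand-ins (the program's actual dataset is not shown here); the point is comparing follow-up rates across enrollee subgroups.

```python
# Hypothetical enrollment records: (education_level, enrollment_week, attended_followup).
# These values are invented for illustration; they are not program data.
records = [
    ("none", 3, False),
    ("none", 12, True),
    ("primary", 2, False),
    ("secondary", 20, True),
    ("secondary", 3, True),
    ("none", 4, False),
]

def followup_rate(rows):
    """Share of women in `rows` who attended their follow-up visit."""
    if not rows:
        return 0.0
    return sum(attended for *_, attended in rows) / len(rows)

# Subgroups flagged in the analysis: early enrollees (weeks 1-4) and
# women with no education.
early = [r for r in records if r[1] <= 4]
no_education = [r for r in records if r[0] == "none"]

print(f"Early enrollees follow-up rate: {followup_rate(early):.0%}")
print(f"No-education follow-up rate: {followup_rate(no_education):.0%}")
print(f"Overall follow-up rate: {followup_rate(records):.0%}")
```

Comparing subgroup rates against the overall rate is what surfaces the low-adherence groups that then receive targeted messaging.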
- We changed core aspects of our program based on data.
Our key takeaway is that it is desirable – and even appropriate – to course correct and refine technology implementation based on continuous learning and adapting.
By Rajeev Colaço, Senior Manager, MERLA, RTI International