In early 2015, I had a brilliant idea. I thought we could use sensors to have an impact on hypertension, which is a leading cause of heart attacks, stroke, and kidney failure. As with many brilliant ideas, it was a failure when put into practice.
A Sensor Study Failure
Hypertension is becoming more prevalent in emerging middle-income countries like Ghana, driven by the growth in sedentary office jobs that pay well enough for employees to consume high-fat, high-salt foods on a regular basis.
One way to combat hypertension is through regular exercise, and what better way to understand how much one exercises (or not) than by using a wearable activity tracker, like a Fitbit, or in our case, a Garmin Vivofit?
In late 2015, I was able to convince my colleagues that we should design a research study that looked at how wearable activity trackers could affect hypertension in Ghana. By early 2016, we had won an innovation grant to explore this idea. That was the last celebration I had with my team.
While they felt the ensuing experiment was a success, and rightly so, as it accomplished their main goal of producing a peer-reviewed paper, I found it to be an utter failure for three key reasons.
1. I failed to accept my team’s version of success
From the start, my team’s goal with this effort was to produce a research study that was sufficiently rigorous to be published in a peer-reviewed journal. However, I am not a fan of academic publications, so I chafed at what I considered an errant version of success.
I wanted more than a publication; I wanted impact. I wanted to test, fail, and iterate fast to explore the bleeding edge of sensors’ impact on health.
Our divergent views on success were evident from the start, and rather than letting the issue fester, I should’ve accepted the team’s version of success instead of complaining that they were not achieving my expectations. They were gracious about my difference of opinion, but I certainly could’ve done better.
2. I failed to understand the IRB process
When one does a research study with human subjects, the research protocol must go through an institutional review board (IRB) – a committee that reviews the methods proposed for research to ensure that they are ethical. For our work in Ghana, we needed to pass an internal IRB and one in Ghana.
Where I thought we could quickly iterate on our efforts, adapting our study to participant reactions as we went along, my colleagues pointed out that every single change to our methodology had to be reviewed by both IRBs, each of which met only once a quarter.
Now I fully understand the need for an IRB – we’ve all worked on a project we wished had better ethics – but the time delay of double review killed my dream of a dynamic intervention and caused me great frustration. In theory, we could test one hypothesis per quarter; for all practical purposes, we were only able to test one hypothesis per year.
Rather than throwing up my hands in frustration, I should’ve accepted the IRB process and worked with it and my team to build in the flexibility we needed to experiment, or found ways to extend our time on the project to account for the iteration delays.
3. I failed to support my team
When we started the project, back when I had grand dreams that were well beyond the ensuing reality, I expected that I would be a core team member and contribute to the project on a weekly basis.
Once we got going, I realized both how little time I realistically had for this project given the needs of my other projects, and how this project wasn’t going to be the dream that I wanted, so I stepped back from day-to-day efforts. In this, I failed my team.
We all can claim to be “busy”, but what we really mean is that we are prioritizing other things over whatever we claim to be too busy to do. I did that to this project, and I feel guilty about it to this day. I should have, and could have, paid it more attention, regardless of its direction versus my dreams.
Share your failure
Again, my team didn’t see this project as a failure, and they never called me or my effort a failure, but failure doesn’t need external judges. We are all our own arbiter of what is a success, or not, in our work.
For me, this project was a failure.
I am sure you too have worked on a project that you considered a failure, regardless of what the project team said, or what was written in the final report. So might it be time that you were honest with yourself and your peers?
Could it be time for you to do like I’ve just done and share your failure with others?
Here is the perfect opportunity: Fail Festival DC 2017 is coming up on December 7, 2017, and there you can share your failure with 300 of your supportive peers in a fun, off-the-record night of laughter and acceptance.
So what are you waiting for? Apply today!
Great post. No one can ever fault you for not putting yourself out there and being honest about your own work. Well done sir.
I find your remarks on the motivation of your peers to be irresponsible. You write that you were the only one on the team who wanted to have an impact; that the researchers only did so to get a publication. I sincerely doubt that was their sole motivation, and ask that you speak with them about their motivation and revise this post. Articles in academic journals are an important component of publicizing the fact that the work was done, and that it was done with rigor. A publication lets other researchers, program designers, and especially funders know what worked or didn’t, and why, all so that new interventions can be designed and tested, all to have a better impact. Perhaps their zeal to have a publication was the same motivation you had: impact. Measurable, rigorously-implemented impact. We’re all on the same team here, I promise.
Caleb, my point was that I wanted to have my version of impact, not the version of impact that my team wanted. It really doesn’t matter what their version of impact was. I should’ve accepted my team’s version of impact. That was the crux of my failure.
As to the impact of academic papers, there’s a reason I came up with the JadedAid card: “A research grant to find evidence that evidence-based research for policy makers is used by policy makers to make evidence-based policy.”