This year’s technology buzzword is artificial intelligence, which means you’ve already been asked how your organization can incorporate AI and machine learning in your programs.
Hopefully, you answered with this. Or you could be more serious and reply that we are all already using aspects of AI to augment and enhance, not replace, activities we're already doing, such as running natural language chatbots and applying pattern recognition to satellite imagery.
Yet because AI is such a new technology, there are few, if any, resources available to thoroughly evaluate the what, where, and how of using AI in our programs. So far, there are four strong publications to ground our thinking about this new technology:
- Artificial Intelligence in Global Health from USAID and other donors
- Making AI Work for International Development from USAID
- Responsible AI Practices from Google
- Trusted Artificial Intelligence from IBM
While each of these publications advances our understanding of AI, we are still missing a foundational document.
Questions to Ask When Designing Artificial Intelligence Activities
We need to have a set of criteria to evaluate how we are designing and developing AI systems to ensure that we are being responsible with this new technology, evoking the simplest and strongest ethical code: do no harm.
That was the focus of the Technology Salon on How to Evaluate Artificial Intelligence Use Cases for Development Programs? As part of the event, we developed an evaluation framework for artificial intelligence solutions with guidance from these thought leaders:
- Adele Waugaman, Senior Advisor, Digital Health, USAID
- Priyanka Pathak, AI for Development Course Facilitator, TechChange
- Shali Mohleji, Technology Policy, Government and Regulatory Affairs, IBM
- Richard Stanley, Senior Technical Advisor, Digital Health, IntraHealth International
Salon members helped draft an AI evaluation framework that built on the Principles for Digital Development to create an approach we can all use in our international development programming.
Please review and edit the draft AI evaluation framework here.
Your input is specifically requested to improve this document, which will serve as the foundation for a future publication.
Humans Are Still Central to Artificial Intelligence
The need for human input and control in every aspect of artificial intelligence activities flowed throughout the Technology Salon and comes through the draft AI framework. Core ideas included:
– It’s our responsibility to explain AI
As development practitioners and technology experts, it’s our responsibility to make sure that AI applications and their components (data, algorithms, outputs) are explained in a way that our constituents can understand.
– We should augment humans, not replace them
We need to focus the conversation on how AI can augment human decision making and enhance our reach, building on the much-needed human touch. This is counter to one current narrative that AI is made to replace human efforts.
– Data divides drive many concerns
Like digital divides, there are many data divides. One of the largest is the basic lack of data on the constituents we serve, data we would need for training, using, and validating AI. This gap drives the use of proxy data, which can radically increase bias in results.
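A toy simulation can make the proxy-data risk concrete. In this sketch (an illustrative assumption, not from any of the publications above), one group's true need is systematically under-recorded, so a simple flagging rule built on the proxy measure misses far more genuinely high-need people in that group than in the other:

```python
import random

random.seed(0)

# Toy population: each person has a group label and a true "need" score.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    need = random.gauss(50, 10)
    # Data divide: group B's need is under-recorded, so we only
    # observe a proxy that systematically understates it.
    proxy = need if group == "A" else need - 15 + random.gauss(0, 2)
    population.append((group, need, proxy))

def flagged(person):
    """A naive rule: flag anyone whose observed (proxy) score exceeds 55."""
    return person[2] > 55

def miss_rate(group):
    """Share of a group's genuinely high-need members the rule fails to flag."""
    high_need = [p for p in population if p[0] == group and p[1] > 55]
    missed = [p for p in high_need if not flagged(p)]
    return len(missed) / len(high_need)

print(f"Group A miss rate: {miss_rate('A'):.2f}")
print(f"Group B miss rate: {miss_rate('B'):.2f}")
```

The rule itself contains no reference to group membership; the disparity comes entirely from the biased proxy data, which is exactly why evaluating the data behind an AI system matters as much as evaluating the algorithm.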
Overall, with AI rising up the hype cycle to the peak of inflated expectations, we need to continue discussions like this one to make sure that we can utilize AI for good.
NetHope’s Artificial Intelligence working group also drafted a framework to help development, humanitarian, and conservation practitioners think through whether AI is appropriate for a particular use case. We are currently testing this with participants from a number of organizations.
Let’s seek to converge this with the output of the Tech Salon into a single tool.
The majority of what we read nowadays about AI and machine learning supports the idea that these developments will replace human efforts, especially in fields like customer support and call centers. This causes people to panic; everyone is scared that one day they might become jobless because of AI. It’s important to be aware that we still don’t have enough resources to evaluate the what, where, and how of using AI to augment and enhance the tasks we’re doing. The aim is not to replace human efforts, but to augment them. Technology experts shoulder the responsibility of explaining AI applications to people with no technical background. And the way AI systems are designed and developed needs to be evaluated to ensure that no harm is done.