A chatbot can be defined as “a computer program designed to simulate conversation with human users, especially over the Internet”. Chatbots are trained to respond to voice, image-based and/or text input with varying degrees of sophistication.
Simple bots are constrained by pre-determined scripts, such as a list of frequently asked questions (FAQs). These bots can neither hold advanced conversations nor deviate from their scripts. More advanced bots use Natural Language Processing (NLP) and Machine Learning (ML) to mimic human conversation.
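To make the distinction concrete, here is a minimal sketch (our own illustration, not drawn from the report) of the kind of keyword-matched script a simple bot relies on; the FAQ entries and wording are hypothetical. More advanced NLP/ML bots replace this exact lookup with statistical intent classification.

```python
# Minimal sketch of a scripted FAQ bot: it can only answer questions
# that match keywords it already knows about, and falls back otherwise.

FAQ_SCRIPT = {  # hypothetical entries, for illustration only
    "opening hours": "Our distribution point is open 09:00-17:00, Monday to Friday.",
    "registration": "To register for assistance, please bring your ID to the nearest office.",
}

def scripted_reply(message):
    """Return a canned answer if the message contains a known keyword."""
    text = message.lower()
    for keyword, answer in FAQ_SCRIPT.items():
        if keyword in text:
            return answer
    # Anything outside the script falls through to a generic fallback.
    return "Sorry, I can only answer a few common questions."

print(scripted_reply("What are your opening hours?"))      # matches the script
print(scripted_reply("My tent was damaged in the storm"))  # off-script: fallback
```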
With some humanitarian organisations having experimented with chatbots for several years, many are now interested in taking stock of their experiences and fostering greater awareness of how to design, budget for and maintain chatbots responsibly and effectively.
Responding to these priorities, The Engine Room has developed Chatbot Deployments in Humanitarian Contexts to explore the existing uses, benefits, trade-offs and challenges of using chatbots in humanitarian contexts. This research has resulted from a collaboration with IFRC, ICRC, and a research advisory board consisting of representatives from IFRC, ICRC, the Netherlands Red Cross and UNHCR.
7 Lessons Learned Using Humanitarian Chatbots
Below are seven relevant lessons learned by the humanitarian practitioners we heard from, with critical consideration given to the limits of chatbots, and to when and how chatbots can be appropriate and effective solutions in humanitarian operations.
1. Assess if a chatbot is the appropriate solution
Before deciding whether to implement a chatbot, our interviewees urged organisations to be clear on what issue they specifically want the proposed chatbot to address, and to assess whether a chatbot is an appropriate tool for this. If the organisation decides that a chatbot is the right tool, it’s important to ensure there is a shared understanding within the organisation of what chatbots can and cannot do.
Humanitarian actors should also weigh the potential benefits of chatbots against the ethical imperatives that come with a humanitarian mandate. This includes critical reflection on the degree to which automated interactions can fulfil the needs of the populations a humanitarian organisation is aiming to serve, as well as the responsible data risks opened up by the introduction of technologies such as artificial intelligence and machine learning.
All of the humanitarian staff and chatbot technologists we spoke to said that humanitarian organisations should have a shared understanding of the resources necessary for success before going ahead with chatbot deployment, taking account of the fact that a chatbot is not a one-time investment, but rather a tool and infrastructure that will need to be maintained and changed over time.
Considerations include the resources and capacity required to integrate a chatbot into organisational workflows; language translation and contextual adaptation; infrastructure needs; staff training and internal or external technical capacity; and maintenance costs.
2. Chatbots may increase responsiveness
Responding individually in a timely manner to urgent requests for information, guidance and services is a common challenge for humanitarian organisations, particularly in the context of acute crises, budget concerns and staff overwork.
For humanitarian organisations, the hope has been that chatbots can contribute to more responsive and personalised interactions between the organisation and those they serve. According to several humanitarian staff we interviewed, interest in chatbot use by humanitarian organisations peaked during the early stages of the COVID-19 pandemic, when access to field sites was diminished, queries and requests for assistance increased and digital services grew in relative importance.
In this context, chatbots have been seen as a potential stand-in for in-person contact, increasing the efficiency of humanitarian response, preventing burnout among humanitarian staff and allowing them to focus on difficult or demanding requests rather than “sitting on the phone answering the same questions 30 times a day.”
However, three years into the pandemic (at the time of writing), the humanitarian staff we spoke to emphasised the need to maintain, and in certain cases even to increase, in-person services and outreach, underlining that chatbots cannot be understood as a replacement for other forms of programming, engagement or communication.
Our research suggests that, at best, chatbots can be seen as a complementary component of a larger ecosystem of tools, services and communication channels: supporting the effective provision of basic information and serving as a “triage” mechanism that helps steer people towards services and human assistance – the “hows” of which we discuss later in the report.
3. Limits to chatbot personalisation
People interacting with humanitarian organisations express a desire for personalised interaction that most chatbots deployed by humanitarian organisations do not currently provide. Several practitioners we spoke to told us that people interacting with their chatbot have demonstrated an expectation of being able to speak to someone capable of addressing complex questions and providing personalised responses, with a demand for high-quality communication, especially in emergency situations.
A chatbot’s technical limitations constrain what kinds of interaction are possible: for example, simple and mid-range chatbots do not recognise typos, slang or phonetic typing. Relatedly, users often send the chatbot statements rather than questions – which many of the chatbots used in the sector weren’t built to interpret.
These issues can contribute to frustrating error loops, especially when using bots that don’t have “memory” (in other words, when they don’t store past information given by users). Setting aside these technical constraints, however, many of those we interviewed emphasised that even the most sophisticated, AI-driven chatbots should not – and cannot – replace human contact, especially in high-stakes scenarios such as the provision of mental health care for vulnerable populations.
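This brittleness is easy to illustrate. The sketch below (our own, hypothetical example, not code from any deployment discussed in the report) shows how an exact keyword lookup breaks on a single typo, while even a simple fuzzy match tolerates it; bots without per-user “memory” face an analogous problem, starting from scratch at every turn.

```python
# Hypothetical sketch: exact keyword matching breaks on typos,
# while a fuzzy lookup (here via difflib) tolerates small misspellings.

import difflib

KNOWN_INTENTS = {"registration": "Bring your ID to the nearest office to register."}

def exact_match(message):
    """Simple bots: an exact keyword lookup, which a typo defeats."""
    for keyword, answer in KNOWN_INTENTS.items():
        if keyword in message.lower():
            return answer
    return None

def fuzzy_match(message, cutoff=0.8):
    """A slightly more tolerant lookup that survives small typos."""
    for word in message.lower().split():
        close = difflib.get_close_matches(word, list(KNOWN_INTENTS), n=1, cutoff=cutoff)
        if close:
            return KNOWN_INTENTS[close[0]]
    return None

print(exact_match("I need help with registartion"))  # None: the typo breaks the script
print(fuzzy_match("I need help with registartion"))  # matches despite the typo
```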
4. Tensions between automation and efficiency
“Automation can cause as many problems as it solves,” one humanitarian staff member remarked. One such “problem” is that efforts to achieve greater efficiency through automation can lead to unintended consequences: several humanitarian staff members highlighted that automating exchanges hasn’t necessarily reduced the workload for organisational staff, especially those responsible for communicating with people and responding to requests that come through the chatbot.
Several staff members we spoke to noted that chatbots can instead create new forms of manual work or “double work” for staff, including scanning chatbot interaction transcripts to ensure people’s needs were met (and following up where necessary), collecting additional information left incomplete, and transferring information between chats with users and data collection platforms in order to analyse it.
Considered within the broader organisational context, the “double work” created by chatbot use can be understood as an expected part of the process of integrating new digital tools; it demonstrates a need to integrate chatbots within existing workflows, practices and policies.[21] One humanitarian staff member shared that after going through a process of extensively testing and integrating chatbots within their organisation, their staff did observe some time-saving gains, with staff reporting that this allowed them to spend less time answering repetitive questions and focus on complex cases instead.
However, some of the work introduced by chatbot adoption is unavoidable: for many of those we spoke to, responding individually to people when a chatbot fails to answer their questions or requests is a priority, both to meet the organisation’s mandate and to minimise reputational risk. For one interviewee’s organisation, concerns over providing adequate responses – on top of the additional work required to obtain complete data – led it, after initially trying chatbots, to opt for simpler forms of data collection such as forms and surveys.
5. Chatbot feedback isn’t representative of community needs
Humanitarian organisations are interested in how chatbots can be used as a feedback mechanism to gain insights into population needs and thus allow organisations to better shape the services they provide, especially where they do not have a physical presence in a particular country or region. There are two types of feedback that can come into play: the first is feedback about the chatbot itself, and the second concerns the humanitarian organisation’s work and services more broadly.
To collect the first type of feedback, organisations use tools such as customer satisfaction surveys, though they note that people often leave these unanswered. To supplement these structured forms of feedback collection, organisations also analyse “passive” user data such as click-through rates from the chat app to the organisation’s website, to see if users can find the information they are looking for. As noted earlier, the humanitarian organisations that we spoke to also review chatbot interactions manually to find out if people have received adequate responses to their queries.
The second type of feedback concerns the humanitarian organisation’s work more broadly. Here, “feedback” might not necessarily be a structured review of the humanitarian organisation and its services, but rather a request or complaint about specific services from someone using the organisation’s chatbot or making an initial inquiry. Interviewees said that often this kind of feedback came in not through designated digital channels for complaints or feedback, but rather through whichever channel a person was already using to interact with the chatbot.
The feedback received sometimes demonstrated a mismatch between the intended purpose of the chatbot and its actual use. For example, one organisation we spoke to set up a chatbot to combat COVID-19-related misinformation, but staff saw that many of the recorded interactions were from people seeking food aid, which the organisation wasn’t in a position to provide.
A staff member at another organisation observed that people used the chatbot as a tool for reporting potentially dangerous or risky situations, something the chatbot wasn’t designed for. This staff member highlighted that this kind of “misdirected” feedback can signal important needs, and emphasised that organisations need to plan for the eventuality that a chatbot designed for a specific purpose may be used differently (a form of feedback in itself).
Our discussions also revealed some broader takeaways on chatbots as a feedback mechanism: the humanitarian staff we spoke to urged organisations not to over-rely on chatbots or digitally collected feedback in general. In some cases, those we spoke to observed that the proportion of people who gave feedback was too small for that feedback to be extrapolated into wider lessons. Others pointed out that the populations most in need often lack access to digital tools, and without their perspectives, it would be difficult to shape services able to respond to their needs. Furthermore, the more organisations “listen” to communities through the collection and analysis of digital data, the more likely responsible data concerns are to arise (we look at this in more detail later in this report).
6. Effective chatbots are integrated with comms channels
Several humanitarian staff members and chatbot technologists said that their chatbots were most successful as part of a “hybrid” communication system that clearly links the chatbot to services staffed by people – described by one humanitarian innovation expert as “adequate oversight or monitoring from a staff member who is able to manage, address or refer select messages that require follow up.”
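As a rough illustration of this hybrid pattern – a sketch of our own, with hypothetical answers and queue names rather than any specific deployment – the bot answers what it can and refers everything else to a human-staffed follow-up queue, providing the kind of oversight described above:

```python
# Hypothetical sketch of a "hybrid" setup: the bot handles simple queries
# and refers everything else to a queue monitored by staff.

from collections import deque

AUTOMATED_ANSWERS = {  # illustrative entries only
    "opening hours": "We are open 09:00-17:00, Monday to Friday.",
}

human_followup_queue = deque()  # reviewed by staff who manage, address or refer messages

def handle_message(user_id, message):
    text = message.lower()
    for keyword, answer in AUTOMATED_ANSWERS.items():
        if keyword in text:
            return answer
    # Outside the bot's scope: log it for human follow-up rather than looping.
    human_followup_queue.append({"user": user_id, "message": message})
    return "Thanks - a member of our team will follow up with you directly."

print(handle_message("user-1", "What are your opening hours?"))
print(handle_message("user-2", "I need to report an unsafe situation"))
print(len(human_followup_queue), "message(s) waiting for staff review")
```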
They added that “any efforts to create these solutions need to be accompanied by organisational and cultural change that essentially morphs and potentially increases direct community engagement by humanitarian personnel.” Such organisational change might require examining how the internal knowledge management and information-sharing processes behind the information a chatbot provides could fit with (rather than duplicate) the processes behind other communication channels.
As a concrete example of how chatbots can complement other channels for direct communication with communities, a chatbot technologist working with a humanitarian organisation pointed to a case where a humanitarian team ran a radio phone-in alongside a chatbot channel. They saw these two channels as informing one another, which they felt increased the value of the overall engagement. Speaking of lessons learned from this programme, they argued that “chatbots are a channel of communication alongside others. It won’t replace them, but it can augment them.”
7. Transparency & expectation are key to chatbot trust
Expectation management around what chatbots can and cannot do is essential, given that chatbots should be understood as just one part of a broader communications and service provision strategy.
Most of our interviewees argued that it’s also essential for humanitarian organisations to be transparent and explicit about when people are interacting with bots, as not doing so can lead to frustration and confusion. One humanitarian communications expert estimated that only around one third of those interacting with the chatbot on their organisation’s social media channels clearly understood that they were speaking with a chatbot, while the majority only came to realise they were speaking to a bot after being asked repetitive questions by it (with resulting frustration).
An edited excerpt from Chatbot Deployments in Humanitarian Contexts by The Engine Room