
3 Strategies for Responsibly Deploying Generative AI in Your Program

By Guest Writer on May 30, 2024


Generative AI (GenAI) is remarkable, but to be truly useful it needs to help your organization achieve its goals safely, reliably, and affordably, especially in the global health sector. In the excitement of getting AI into projects as soon as possible, it can be tempting to simply add a prompt to ChatGPT and call it a day.

However, there are better techniques to ensure AI is more than a nifty feature tacked on to your project and is safely, securely, and thoughtfully deployed. This article outlines three such strategies: 1) guardrails, 2) user experience design, and 3) red teaming.

For this example, we will discuss a simple GenAI application that answers questions from a repository of project documents. It uses a common technique called Retrieval Augmented Generation (RAG), a way for a large language model (LLM) to make use of data it hasn't been trained on, such as a cache of PDFs.

This is important for global health because general LLMs such as ChatGPT don't know much of the jargon that is often used in our reports, nor are they aware of the various organizations and stakeholders that make up our projects. Our RAG is a simple one: drawing on a knowledge base of documents related to an HIV project, the user can ask a question and see the sources used to answer it.
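
To make the pattern concrete, here is a minimal single-turn RAG sketch in Python. It assumes the OpenAI Python SDK; the document chunks, model name, and keyword-overlap retrieval are placeholders for illustration, where a production system would extract chunks from the real PDFs and retrieve them with an embedding model and a vector store.

# A minimal single-turn RAG sketch. Assumes the OpenAI Python SDK
# ("pip install openai") and an OPENAI_API_KEY in the environment.
# The chunks, model name, and keyword-overlap retrieval are toy
# stand-ins for a real chunking, embedding, and vector-store setup.
from openai import OpenAI

client = OpenAI()

# Illustrative chunks; in practice these come from the project PDFs.
CHUNKS = [
    {"text": "Quarterly ART adherence rose to 87% across supported sites.",
     "source": "hiv_quarterly_report.pdf", "page": 4},
    {"text": "Cooper/Smith leads the data analytics workstream.",
     "source": "project_overview.pdf", "page": 2},
]

def retrieve(question, k=2):
    """Toy retrieval: rank chunks by word overlap with the question."""
    words = set(question.lower().split())
    return sorted(CHUNKS,
                  key=lambda c: len(words & set(c["text"].lower().split())),
                  reverse=True)[:k]

def answer(question):
    chunks = retrieve(question)
    context = "\n".join(
        f"[{c['source']} p.{c['page']}] {c['text']}" for c in chunks)
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the excerpts provided. If they "
                        "do not contain the answer, say you cannot answer."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content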

1. Guardrails

What a model can't do can be even more important than what it can. There are many examples of AI assistants that were only supposed to talk about a company's products being drawn into meandering conversations about irrelevant topics. In public health especially, a user who gets no answer when they want one may be frustrated, but a user who gets a wrong answer may harm others by acting on incorrect information. Guardrails put limits on what the AI can and cannot do.

One of the most important guardrails you can put on the AI is the system prompt: a set of instructions you give to the AI that the user doesn't see. This can specify a personality or a style of writing, but it can also tell the AI when to refuse the user's query; the user might ask anything, after all. In our RAG, we specify that it shouldn't answer questions from outside our PDFs, even if it 'knows' the answer already, and should instead politely decline basic but irrelevant questions.
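
As an illustration, such an instruction might look like the following. The wording is a hypothetical sketch, not the exact prompt our project uses.

# Hypothetical system prompt enforcing the guardrail; illustrative
# wording only, not the project's actual prompt.
SYSTEM_PROMPT = """\
You are an assistant for an HIV program's document library.
Answer ONLY using the document excerpts supplied with each question.
If the excerpts do not contain the answer, reply exactly:
"I can only answer questions about the project documents."
Never answer from general knowledge, even if you know the answer.
"""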

In some cases, there may be information available, but not enough to provide a definitive answer. For example, our document repository refers to Cooper/Smith frequently, but nowhere does it give a history or overview of the company. When this happens, we've instructed the AI to be explicit about what is in the documents and what is conjecture.
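
Continuing the sketch, a hypothetical clause covering this behavior might read:

# Illustrative addition to the same system prompt: when documents are
# only partially relevant, separate documented facts from inference.
PARTIAL_INFO_RULE = """\
If the excerpts mention a topic but do not fully answer the question,
say so. Attribute documented facts to the documents ("Based on the
documents...") and clearly label anything beyond them as inference
("The documents do not say this directly, but...").
"""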

2. User Experience Design

One of the most important things you can control is the way your users interact with your AI. Since ChatGPT captured the public's imagination, many people have come to think of generative AI as inherently a chatbot, but that is far from the only way to interact with an LLM. There are some key design decisions you can make to help protect your users from shooting themselves in the foot.

In a chatbot-style interface, the user can have a conversation that goes on for a long time. As the conversation gets longer, the AI's behavior becomes less predictable, and it becomes easier for a user to cajole the AI into saying something beyond its remit, which may be offensive or inappropriate. Our RAG limits users to a single question, cutting off this important source of unpredictability. If a simpler interface is sufficient for a project's goals, go with the simplest interface that works.
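
In code, the single-question pattern amounts to holding no conversation state between requests. A rough sketch, reusing the answer() function from the earlier example:

# Single-question pattern: each request is handled with a fresh
# context, so no conversation history accumulates for a user to
# steer off-topic. Reuses answer() from the RAG sketch above.
def handle_request(question):
    # Deliberately stateless: no chat history is stored or passed in.
    return answer(question)

if __name__ == "__main__":
    while True:
        q = input("Ask a question (or 'quit'): ").strip()
        if q.lower() == "quit":
            break
        print(handle_request(q))
        # Nothing from this exchange carries over to the next one.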

Interface design can also draw a user’s attention toward or away from certain elements. If the AI answers with a clear, definitive statement, users will often take it at face value as a certainty. If the sources are hidden behind extra buttons or icons, they may not bother to look at them.

In our RAG, we place all the sources the AI is using prominently beside its answer, and give the user the ability not only to see which passages it's referring to, but to download the original documents, with page references provided. This communicates to the user that they should be double-checking the AI, not just taking it at its word.
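
A sketch of what the backend might return so the interface can display those sources, again building on the earlier example; the field names here are invented for illustration.

# Response payload that keeps sources beside the answer. A UI would
# render each source as a visible citation with a link to download
# the original PDF at the cited page.
def answer_with_sources(question):
    chunks = retrieve(question)  # from the RAG sketch above
    return {
        "answer": answer(question),
        "sources": [{"document": c["source"],
                     "page": c["page"],
                     "excerpt": c["text"]} for c in chunks],
    }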

3. Red Teaming

Red Teaming is a way of dividing your AI team into two groups — the Blue Team, which builds the AI tool, and the Red Team, which tries to break it.

This is important because with GenAI, it is very easy for things to look like they’re working at first glance, when actually there are hidden dangers and errors that will severely degrade the user experience, and potentially be harmful to the project. The Red Team has a vested interest in finding these vulnerabilities, and ultimately they are the ones who decide whether an AI is ready to be shared with users.

Benchmarks are one of the most important tools at the Red Team's disposal: a series of questions and expected answers written before the Red Team even sees the model. This way, the results are not tailored to the AI itself, and the benchmark can be run repeatedly as the AI improves.
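
As a rough illustration, a benchmark harness can be as simple as a fixed list of questions with checks on the replies. The cases and keyword matching below are invented stand-ins for real grading, which is often done by human reviewers or a separate judge model.

# Minimal benchmark harness sketch: fixed questions written in
# advance, re-run after every change to the AI. Uses answer() from
# the RAG sketch above.
BENCHMARK = [
    {"question": "What was ART adherence last quarter?",
     "must_include": ["87%"]},
    {"question": "What is the capital of France?",  # out-of-scope probe
     "must_include": ["only answer questions about the project documents"]},
]

def run_benchmark():
    passed = 0
    for case in BENCHMARK:
        reply = answer(case["question"]).lower()
        ok = all(kw.lower() in reply for kw in case["must_include"])
        passed += ok
        print("PASS" if ok else "FAIL", "-", case["question"])
    print(f"{passed}/{len(BENCHMARK)} cases passed")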

Ideally, the Blue Team doesn't see the specific benchmarks, so they can't tailor the AI to pass them, and they receive only general feedback. But I'm on our Blue Team, so a fuller discussion by our Red Team will appear in our next article!

Conclusion

These are but a few techniques that can be used to increase the reliability and safety of your AI deployments; this article is far from exhaustive. Although we’ve used a RAG example with PDFs here, similar principles apply to other models, such as image scanners or user-training chatbots.

The most important thing is to thoroughly test that the AI does what you want it to do, doesn’t do what you don’t want it to, and is consistent with both. In global health especially, and the humanitarian and development sector more widely, we have a duty to use these powerful technologies in a safe and ethical way.

Cooper/Smith originally published this as "Deploying GenAI in the Real World."


