The shift from Responsible AI to Trustworthy AI, evident in policy and practice, represents an evolution in how organisations and governments conceptualise and implement AI technologies.
The critical question here is: how similar or different are these two concepts? In our view, they are two distinct but related concepts.
Responsible AI vs Trustworthy AI
Responsible AI provides a fundamental theoretical approach to how everyone involved in the AI lifecycle understands and acknowledges their roles in creating ethical, fair, inclusive, explainable, and accountable AI systems, while Trustworthy AI focuses on the practical aspects of ensuring that AI systems are trusted by users and society at large.
Responsible AI establishes and defines principles and values fundamental to ethical AI. Responsibility here does not refer to the AI as an artefact but to the responsibility of the individuals or institutions involved in AI design, development, and deployment towards users, society, and the environment.
It asks how to make the whole AI lifecycle, from design and development to deployment, responsible and sensitive to human and environmental needs and interests.
Trustworthy AI, on the other hand, encompasses practices, principles, and approaches for ensuring that AI systems earn and retain the trust of users and relevant stakeholders. While Responsible AI provides the ethical foundation, Trustworthy AI deals with the technical and operational implementation of these principles to build and maintain trust in AI technologies.
There is an emphasis on building trust between AI designers, developers, users, and stakeholders through reliability and adherence to ethical standards.
Trustworthy AI seeks to answer the question: what can make AI systems trustworthy in a particular ecosystem or context? Therefore, we are asking what can make AI systems developed in and for Africa trustworthy.
It is important to note that we are not making an argument for one concept over the other; both Responsible AI and Trustworthy AI discussions and approaches are critical in making AI applications more tailored to relevant contexts and needs, as well as more effective for human flourishing.
For us, it is not Responsible AI vs Trustworthy AI. Trustworthy AI approaches build on the theoretical foundations laid down by Responsible AI. Therefore, it is Responsible AI and Trustworthy AI.
Trustworthiness in African Contexts
Trust plays a pivotal role in the acceptability of AI systems and influences attitudes towards AI. As noted above, the European Commission has conceptualised its perspective on trust through the requirements set out by the HLEG.
However, as Ewuoso (2023) pointed out, trust and trustworthiness tend to differ among social groups, and the underlying conditions that shape these concepts differ from region to region. African perspectives on trust are therefore likely to differ from European perspectives.
Thus, it is important to explore some African perspectives on trust and trustworthiness and how these can influence the role AI can, or is allowed to, play in Africa. How the parameters of trustworthiness are defined for AI will likely differ between the two regions.
Eke et al. (2023a, 2023b) observed that many African societies are characterised by values and moral principles based on communitarianism.
Although conceptualised slightly differently across African cultures, the ideas of communality and interconnectedness are deeply embedded in various aspects of African life, including social structures, decision-making processes, and cultural practices.
From Ubuntu in South Africa to Ujamaa in Swahili-speaking East Africa and Umunna among the Igbo of Nigeria, the belief that an individual’s identity and well-being are inextricably linked to the community’s welfare is emphasised. This manifests in many ways, such as mutual support and cooperation, shared values and norms, communal approaches to conflict resolution, community cohesion and, most importantly, interdependence.
African societies can therefore be seen as an ecosystem in which humans and spirits (often represented in animate and inanimate objects) are deeply interdependent. Central to this holistic cultural ecosystem is trust.
Defining Trust in African Languages
The different meanings attributed to trust in African languages highlight the centrality of trust in the communality of African societies. Some of these meanings include ‘dependence’, ‘hope’, ‘expectation’, ‘faith’, and ‘confidence’ (Idemudia and Olawa, 2021).
# | African word for trust (language, region) | Meaning
---|---|---
1 | Igbẹkẹle (Yoruba, Nigeria) | Dependence
2 | Ithemba (Zulu, South Africa) | Trust, hope, expectation, faith, and dependence
3 | Imuentinýan/iyegbekọ/Ọmwan imuentinýan (Edo, Nigeria) | To depend or rely on someone
4 | Dogara | Faith or dependency (on God)
5 | Ntụkwasị obi (Igbo, Nigeria) | Reliance or dependence (literally, placing one’s heart or confidence in something or someone)
6 | Ho tšepa ha (Sesotho, South Africa) | Confidence
7 | Tshêpa (Setswana, Botswana) | Confidence in someone
8 | Imani (Swahili, Eastern Africa) | Faith or belief
9 | Ahoto | Reliance, confidence, or assurance in someone or something
The above connotations of trust hint at how critical trust is to the inherently relational values and norms of African societies.
As Ewuoso (2023) pointed out, “trust is both necessary to foster relationships and, at the same time, it is the reason for the existence of the relationship”. This is the concept of trust as relational.
However, in these communities, faith, hope, confidence, or dependence is reposed in someone or something that is in harmony with the community; someone or something that can be trusted or that has demonstrated trustworthiness.
Trust as Social Cohesion
Requirements for trustworthiness are therefore determined by the need to maintain social cohesion, mutual support, and shared benefit. One of these requirements is consistency and reliability; others are respect and reciprocity, transparency and openness, and accountability and justice.
These are similar to the seven requirements for trustworthy AI explained above.
For instance, ‘transparency’ is critical to the idea of interdependence. Explainable AI or less opaque AI will therefore help to enhance trust (Ewuoso, 2023). However, the difference is that in the European perspective, individuals are emphasised more than the collective: ‘autonomy’ and ‘individual privacy’ over ‘collective privacy’.
In Africa, the principles of solidarity, shared responsibility, and collective privacy will take precedence over the privacy of the individual. In that sense, the perspectives are dissimilar.
Furthermore, the willingness to maintain harmony and work towards the benefit of society, while refraining from actions that could harm the group, is fundamental to building and sustaining trust in African cultures.
This collective ethos fosters social cohesion and mutual support. As individuals see themselves as integral parts of the community, there is a strong sense of collective responsibility where all actions are expected to contribute to the common good.
Trust underpins this ethos and forms the basis for all social relationships. Applied to AI, the question will be: does the AI system operate in a way that maintains the harmony of the community? Collective benefit, rather than personal benefit, will be the focal point.
Additionally, spiritual and ancestral beliefs play a significant role in cultivating trust within African cultures. Trust in spiritual authorities, ancestral guidance, and the supernatural realm helps to reinforce a sense of interconnectedness and collective responsibility within the community.
These beliefs often emphasise the importance of human connection, consciousness, and natural order. In AI this may bring about scepticism or even fear. Some may view AI as a disruption to the natural order or as a challenge to human uniqueness and spiritual beliefs about the soul or consciousness.
This means that in cultures where there’s a strong emphasis on trust in spiritual or ancestral entities, people may be more hesitant to trust AI systems, particularly if they perceive them as separate from or in conflict with their spiritual beliefs.
In this instance, dispelling relevant misconceptions becomes a key part of cultivating trust in the AI systems. Another way of doing this is to align the AI systems with spiritual or ancestral values – for example, by promoting harmony, interconnectedness, or social well-being. This may improve the acceptability of such systems and how they are integrated into daily life.
Colonialism and Trust
Fundamental to this discussion is the influence of colonialism on the central dynamics of African communality.
Colonialism disrupted traditional social structures (e.g. social hierarchies and systems of governance), undermined cultural practices, and eroded trust within communities (Kingston, 2015).
Colonial powers brought Western values, norms, and institutions that were often at odds with traditional African cultural practices. They exploited ethnic, tribal, and religious divisions, creating artificial boundaries and fostering inter-group rivalries that undermined solidarity and trust within communities.
Together with the economic and labour exploitation that characterised the colonial era, the effects of this damage to social structures are still evident today. These legacies of the colonial era are also evident in AI systems, in what is often referred to as coloniality.
Therefore, AI will need to prove that it has no colonial tendencies (or that it is in harmony with African contextual needs and values) to be trusted in many parts of Africa. In this book, we introduce decoloniality as an essential requirement for trustworthiness in AI.
This means that AI systems designed, developed, or deployed in and for Africa need to be free of colonial tendencies. What datasets inform them? Who makes critical decisions in the design and development process? Who effectively controls the data and the algorithms? These are questions that decoloniality as a requirement can help us answer.
Trustworthy AI in African Contexts
Our argument here is that the trustworthiness of AI in Africa will include achieving the requirements proposed by the EU HLEG: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
But most importantly, it will encompass aspects such as usability (considering African contexts), accessibility and affordability, decoloniality, and demonstrated adaptability of AI to local contexts. These are concepts or principles that were not highlighted by the EU but that are necessary requirements for achieving trustworthiness in African contexts.
An excerpt from An Introduction to African Perspectives of Trustworthy AI by Damian Okaibedi Eke, Kutoma Wakunuma, Simisola Akintoye, and George Ogoh