The world stands at a precarious moment as we work towards shared prosperity. The global community is on track to meet just 17% of the United Nations Sustainable Development Goals (SDGs) by 2030, and progress has plateaued or regressed in many areas.
When used effectively and responsibly, artificial intelligence (AI) holds the potential to accelerate progress on sustainable development and close digital divides, but it also poses risks that could further impede progress toward these goals.
With the right enabling environment and ecosystem of actors, AI can enhance efficiency and accelerate development outcomes in sectors such as health, education, agriculture, energy, manufacturing, and public service delivery.
The United States aims to ensure that the benefits of AI are shared equitably across the globe. As President Joseph R. Biden said in remarks before the United Nations General Assembly in September 2023, the United States is committed “to ensur[ing] we harness the power of artificial intelligence for good.”
In March 2024, the United States led the adoption by consensus of the first-ever standalone resolution on AI at the United Nations General Assembly, “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development,” which established a global consensus on the twin imperatives: harness the promise of AI and mitigate its risks.
The U.S. government has continued to expand work on AI solutions with global impact and local relevance and to partner with the private sector, civil society, and international partners to strengthen AI enabling environments, while at the same time helping ensure adequate safeguards are in place to protect human rights and public safety.
AI in Global Development Playbook
Building on these efforts and the U.S. government’s long history of digital development and of fostering safe, secure, and trustworthy AI, the AI in Global Development Playbook (Playbook) maps a set of key actions for the United States and its development partners, such as other governments, the private sector, and philanthropies, to foster responsible AI ecosystems worldwide, advance sustainable development, create opportunities for partnership and collaboration, and address some of the world’s greatest challenges.
This Playbook is a roadmap to develop the capacity, ecosystems, frameworks, partnerships, applications, and institutions to leverage safe, secure, and trustworthy AI for sustainable development.
Together, these elements constitute a good governance regime for AI, which is a cross-cutting theme throughout the Playbook. Good governance measures can foster trust among citizens, trust can foster adoption, and adoption can drive innovation.
The Playbook is intended to synthesize existing work and offer recommendations for the design, deployment, and use of safe, secure, and trustworthy AI for sustainable development—including some of the steps the United States intends to take to support AI ecosystems worldwide.
To provide practical guidance, the Playbook also includes a series of case studies. These case studies highlight existing initiatives and organizations doing exemplary work in the identified areas. By showcasing these real-world AI examples, the Playbook aims to inspire and guide others.
The Playbook is especially relevant for development practitioners, policymakers, development and philanthropic organizations, and private sector actors looking to contribute to sustainable development, particularly in low- and middle-income countries (LMICs).
As technological advances in AI continue, each of these stakeholder groups should understand the benefits and risks of AI and their respective roles in building safeguards and approaches that support responsible AI ecosystems in global development.
The diffuse, fast-changing nature of AI technology means no one group of stakeholders alone will be successful in delivering the benefits of AI or mitigating its risks. Collaboration between governments, the private sector, civil society, and academia (both within and between countries) is a central element of our approach in this document.
8 AI Playbook Recommendations
The Playbook maps opportunities and challenges across key themes relevant to global development and humanitarian assistance. Addressing AI’s risks is the first step toward realizing its benefits. This Playbook, in alignment with and building off the AI Risk Management Framework from NIST, identifies how risks (such as harm to individuals, communities, organizations, ecosystems, or societies) can be mitigated in the design, deployment, and use of AI.
The Playbook’s recommendations—distilled from consultation with hundreds of government officials, non-governmental organizations, technology firms and startups, and individuals from around the world—are constructed around several focal areas:
1. Enhancing Capacity, Promoting AI-Related Skills Across All Sectors and Levels, and Protecting the Workforce.
Addressing gaps in the AI workforce by equipping a broader range of individuals with AI skills can allow countries to tap into economic opportunities, drive local innovation, and create jobs that contribute to sustainable development. At the same time, AI brings about changes in the labor landscape that necessitate robust social safety nets for workers, action to prevent and address any new risks to workers’ rights, and social dialogue processes that include workers and unions.
2. Building Trusted and Sustainable Digital Infrastructure.
In addition to playing a pivotal role in development more broadly, widespread Internet connectivity and reliable energy resources can enable AI for sustainable development. Partnering to enhance digital infrastructure will not only improve access to AI technologies but also stimulate economic growth and enable communities to leverage AI for addressing local challenges and enhancing quality of life. Priority should be given to energy efficiency in AI systems, with careful consideration of climate and environmental impacts so that AI deployment does not exacerbate existing problems.
3. Broadening Access to Data Storage and Compute Resources.
AI innovators need access to large-scale computing and data storage resources to support model inference, training, and deployment. Making compute more affordable and accessible by expanding access to application programming interfaces (APIs), trusted cloud computing services, and other resources can accelerate the development of AI applications that meet local needs.
4. Creating Representative, Locally Relevant Datasets and Preserving Cultural Heritage.
Locally, linguistically, and culturally relevant datasets that reflect the racial and ethnic diversity of LMICs and their local contexts and realities can enable the development and use of AI models that address community needs, are better suited for use in local contexts, and drive sustainable development. By building representative datasets, stakeholders can work together to enable AI solutions that are more accurate, equitable, and impactful, ultimately fostering inclusive growth and innovation.
5. Developing Strategies to Deliver the Promise of AI in Practice.
Rigorous testing and benchmarking of AI in development contexts, along with broad, public sharing of research findings, are essential to ensuring that AI and AI-enabled interventions are grounded in evidence of what works. This evidence is critical to guiding the scale-up of AI solutions, helping to ensure that they provide broad public value while addressing established development challenges. Systematic assessment of AI initiatives can highlight success stories, inform best practices, and guide investments, making it possible to replicate and scale effective solutions across different sectors and regions.
6. Advancing Good Governance Frameworks for the Development and Use of Safe and Rights-Respecting AI Systems.
AI has the potential to be misused by malicious actors in ways that harm individuals and societies, such as through unlawful or arbitrary surveillance and by facilitating cyber threats, disinformation campaigns, political manipulation, or deepfakes—including synthetic non-consensual intimate images and child sexual abuse material. Proactive measures to advance good governance frameworks must be taken to safeguard democratic processes; enhance transparency; protect cybersecurity, intellectual property, and privacy; ensure alignment with applicable legal frameworks; ensure equitable access to goods and services; and promote human rights, which can help foster trust and resilience in societies.
7. Fostering Trust in AI through Openness, Transparency, and Explainability.
Improving transparency in AI models, development processes, organizational practices, and AI policymaking can enhance reliability, fairness, protection of and respect for intellectual property rights, and accountability. Embracing openness and explainability in how AI systems are designed, deployed, and used can encourage adoption, facilitate transparency, and promote trust in AI systems.
8. Deploying AI Sustainably and for Climate Action.
Seizing opportunities for AI to contribute to energy savings across sectors can play an urgent and important role in addressing the changing climate. Emphasizing sustainability will reduce the net environmental footprint of AI technologies while leveraging AI to enhance energy efficiency across industries, thereby contributing to global efforts to combat climate change.
The United States is steadfast in its support for sustainable development, responsible AI use globally, and collaborative efforts to enhance AI safeguards. By helping address the challenges outlined in the Playbook—through policy, funding, engagement, partnership, and other mechanisms—the United States hopes to encourage and engage more stakeholders to commit to safe, secure, and trustworthy design, deployment, and use of AI technologies.
This article presents the executive summary of the AI in Global Development Playbook.