3 Types of Generative AI Risks and Harms in Foundational LLM Models

By Guest Writer on March 6, 2025

The rapid proliferation of generative AI foundational large language models has become a double-edged sword, with expansive capabilities on one hand and profound ethical, social, and environmental challenges on the other.

The paper “Mapping the Individual, Social, and Biospheric Impacts of Foundation Models” underscores the urgent need to address the wide-ranging impacts of these powerful technologies. It highlights the individual, social, and environmental risks, and provides a framework for understanding the full spectrum of challenges posed by foundation models.

For organizations working in the humanitarian sector, this research offers a crucial guide for developing AI systems that are not only effective but also equitable and sustainable. The path forward requires a holistic approach that integrates technical, ethical, and socio-environmental considerations to ensure that the benefits of AI are realized without exacerbating existing harms.

GenAI Foundational LLM Models

Since the debut of ChatGPT in late 2022, foundation models for Generative AI have dominated discussions in policy, academia, and public discourse. Two key differentiating characteristics of foundation models are their massive scale and their widespread embeddedness.

Foundation models comprise hundreds of billions of parameters trained on vast quantities of data, and they consume enormous resources for both training and deployment. The scale of foundation models means that the risks and harms they present are not only likely to be magnified and amplified, but that this will happen in ways that transcend national and political boundaries, requiring a multi-pronged and transnational response.

Another differentiating characteristic of foundation models is their embeddedness. Foundation models are conceptualized and architected as base models for many diverse types of downstream applications, which renders them invisible yet pervasive. As a result of this platformized architecture, foundation models form the basis of many thousands of extensions, so the negative impacts and harms stemming from them may be obfuscated and rendered relatively intractable.

3 Types of GenAI Model Risks and Harms

These two characteristics—scale and embeddedness—position foundation models to be at once highly adaptive, highly elusive, and highly dangerous. They are not just technological marvels but also potent agents of societal change, and their deployment comes with significant risks and harms that need careful consideration.

1. Individual Risks and Harms

One of the most glaring issues with foundation models is their potential to perpetuate and even amplify biases and harmful stereotypes. According to the paper, 40% of the reviewed literature raises concerns about these models reinforcing hegemonic views and societal biases on an unprecedented scale. These biases can manifest in discriminatory outcomes affecting individuals’ safety, health, and well-being, thereby undermining fundamental rights and freedoms.

Moreover, the reliability of these models is not always consistent, leading to undesirable performance outcomes that can have significant repercussions for individuals. These inconsistencies necessitate a cautious approach to their deployment, especially in sensitive areas like healthcare, legal systems, and education.

2. Social Risks and Harms

Socially, the paper highlights how foundation models can exacerbate misinformation, disinformation, and propaganda. Approximately 20% of the reviewed literature points to the creation and spread of false information as a critical concern. This can erode societal trust, affect democratic processes, and increase the potential for cybersecurity threats and fraudulent activities.

The socio-economic impacts are also profound. The reliance on proprietary software and lack of transparency can lead to market monopolization and perpetuate existing inequalities. These models often entrench power within a limited number of actors, primarily those with the resources to develop and maintain such technologies, further marginalizing less-resourced communities and organizations.

3. Biospheric Risks and Harms

The environmental impact of foundation models is another critical area of concern. Training these large models requires immense computational power, leading to significant carbon emissions and energy consumption. For instance, the training of Google’s BERT model alone was found to have a carbon footprint equivalent to a transatlantic flight.

Additionally, the development of these models often involves the extraction of rare earth elements, which not only degrades the environment but also disrupts local communities, particularly in the Global South. This process of “slow violence” disproportionately affects marginalized communities, replicating patterns of environmental injustice.

Need for Holistic GenAI Governance

Given these extensive and interconnected risks, the paper argues for a comprehensive approach to AI governance. Current governance frameworks, particularly those in Europe and the United States, have focused predominantly on technical safety and catastrophic risks, often overlooking the broader social and ethical implications.

The authors advocate for an integrative perspective that accounts for the socio-technical interdependencies of foundation models. This approach emphasizes the need for policies that address not just the direct impacts on individuals but also the cascading effects on social structures and the environment.

Implications for Humanitarian Organizations

For organizations working on Generative AI systems for humanitarian efforts, the findings of this paper are particularly salient. The risks outlined highlight the importance of developing AI systems that are not only technically robust but also ethically sound and socially responsible. Humanitarian organizations must prioritize transparency, equity, and sustainability in their AI initiatives.

These organizations are often on the front lines of addressing the very issues that foundation models can exacerbate—inequality, misinformation, and environmental degradation. Therefore, they have a critical role to play in advocating for and implementing AI practices that mitigate these risks. This includes pushing for greater transparency in AI development, advocating for policies that address environmental impacts, and ensuring that AI applications do not perpetuate harmful biases or deepen existing inequalities.

An edited synopsis of Mapping the Individual, Social, and Biospheric Impacts of Foundation Models

