I created a digital twin using Generative AI to test how well the technology can mimic human writing patterns. Spoiler alert: GenAI is not replacing me anytime soon. One cool aspect of my digital GenAI twin is seeing all the questions people ask “me.” There are certainly a few odd questions that people pose, and a few that get me thinking. Here is a topical question we’ll explore today:
“What is the climate impact of Generative AI, including electrical power and freshwater usage?”
This question brings up a valid point that my GenAI Twin could not answer: What is the environmental cost of using Generative AI tools like ChatGPT?
Generative AI Power and Water Usage
New Generative AI solutions are spiking data center demand for electrical power and fresh water. Data centers require copious amounts of stable electricity for their racks of servers and for the intricate cooling systems that keep those servers operating efficiently.
Electrical Usage
Data centers get their electricity from many sources, and the most common energy source is fossil fuels. This conflicts with many technology firms promising green computing – powering their systems via renewable energy sources. Since there isn’t enough renewable power in the US for current data center needs, companies like Google are going nuclear.
Estimates now suggest that electricity consumption by data centers, AI, and the cryptocurrency sector could double by 2026, consuming as much energy as Japan. Yet past predictions of data center needs were equally high and did not come to pass, thanks to a focus on energy efficiency. For example, data center industry reports describe a six-fold increase in usage demand from 2010-2018 accompanied by only a 6% increase in electricity use.
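A quick back-of-envelope calculation, using only the figures above (six-fold demand growth against a 6% rise in electricity use), shows how large the implied efficiency gain was:

```python
# Back-of-envelope sketch using the 2010-2018 data center figures cited above.
demand_growth = 6.0   # usage demand grew six-fold
energy_growth = 1.06  # electricity use grew only 6%

# Energy consumed per unit of demand, relative to 2010 levels.
energy_per_unit = energy_growth / demand_growth
print(f"Energy per unit of demand: {energy_per_unit:.0%} of 2010 levels")
# i.e. roughly an 82% efficiency improvement per unit of compute delivered
```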
Water Consumption
“A single GenAI email uses a glass of water” is great click-bait from the Washington Post, ironically promoting more data center use. The underlying academic paper examines the fresh water usage of Generative AI without as much hyperbole, though the facts are clear.
All data centers require massive cooling systems, and water is the easiest and cheapest method today. Yet a water-cooled data center drawing from an aquifer is very different from one drawing from a river, and municipalities should price water accordingly.
Still, computer cooling pales in comparison to other water uses. Many studies show agriculture using 70% of all freshwater, with each almond using a gallon of water to grow and every pound of beef requiring 1,800 gallons of water. In fact, beef could account for over 20% of global carbon emissions.
Yet there is no slowdown in beef or almond consumption. Then there are duplicitous bottled water companies.
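To put the cited water figures side by side (a glass of water per GenAI email, a gallon per almond, 1,800 gallons per pound of beef), here is a small sketch; the half-liter glass size is my assumption, not a figure from the article:

```python
GALLON_LITERS = 3.785

# Figures cited above; the 0.5 L "glass of water" is an assumed size.
water_use_liters = {
    "one GenAI email": 0.5,
    "one almond": 1 * GALLON_LITERS,
    "one pound of beef": 1800 * GALLON_LITERS,
}

# How many GenAI emails' worth of water goes into a pound of beef?
emails = water_use_liters["one pound of beef"] / water_use_liters["one GenAI email"]
print(f"One pound of beef ~= {emails:,.0f} GenAI emails' worth of water")
```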
Big Tech Response: Efficiency!
Data center and big tech companies are laser-focused on reducing GenAI electricity and water costs. Owners already know there are few revenue models that come anywhere near covering GenAI costs (though many models are wildly successful in hype generation).
For example, Apple is trying to have Apple Intelligence run only on consumer hardware, not data centers, and tech companies are moving GenAI solutions to underutilized data centers to reduce their carbon footprint by ~100-1000X.
GenAI & Data Center Usage is Increasing
USAID has been using artificial intelligence since 1983 in its programs, and I was experimenting with ChatGPT predecessors in 2019. Then OpenAI released ChatGPT in November 2022, and there was no going back to a pre-GenAI world.
The current GenAI hype is driving exploration of all types of AI, which is getting more useful every day. I’ve even heard a USAID thought leader saying that AI will have the most dramatic impact on development of any modern technology.
While that may be debatable, AI (and GenAI) are already pervasive in our lives. For example, you already use AI with Amazon, Gmail, Netflix, WhatsApp, etc. We all will use more AI every day from now on. Here are 18 ways we are already using artificial intelligence in development.
We cannot be Generative AI Luddites in digital development.
No One Cares About Climate Change
GenAI uses water and electricity – as does every digital technology – and every online action runs through a data center. Worries about electricity and water will not stop AI or GenAI usage by governments (including our own), and certainly will not stop the private sector.
In addition, you and I might care about climate change. I care deeply. However, climate is not a major concern for most people, and AI’s impact on climate is even less a concern.
If we want to encourage government regulation of AI systems (which I actually do), we should focus on the concerns that matter to voters and politicians. Climate ranks low on every poll, with one showing that more than half of Americans are unwilling to pay any amount of money to combat climate change.
Overall, people really fear losing their jobs to AI, and white-collar workers fear GenAI specifically. This may be a better angle for regulating AI domestically and in advanced economies. Not so much in developing countries, where LMIC governments are embracing AI.
Better Generative AI Arguments
For international development specifically, I believe we have a stronger role in advocating for responsible uses of AI than in trying to stop it. In fact, I’d argue we have a moral obligation to promote responsible AI practices in areas where the harms are tangible and immediate.
Our colleagues in Democracy, Governance, and Human Rights are drowning in bad-actor usage of GenAI that is exacerbating mis-, dis-, and mal-information at scale, deep fakes in elections, etc. Just look at the new DRG Policy’s mention of AI or the Advancing Digital Democracy RFP. Then there is fake news in disaster response, irresponsible health messaging, and the list goes on…
There are many resources that give strong arguments for measured approaches to AI use. I am particularly proud of the USAID Responsible AI training course (I’m biased, I was a trainer) and the USAID AI Action Plan for highlighting the real pros and cons of AI usage.
When discussing GenAI usage in international development, I would advocate highlighting the tangible, resonant AI harms and responses that are already documented. We can also incorporate environmental concerns, as they are real and immediate, yet as a secondary point, using thoughtful arguments that place resource usage in context and recognize continuous technology innovation to reduce costs.
What Are Sustainable AI Alternatives?
My good friend, Linda Raftree, points out that the US will not be regulating AI climate impacts with this new Administration. We need to be thinking:
- Is sustainable, positive use of AI even possible?
- What do we do if we care about climate and the environment, and we also want to use AI?
- Are there bigger picture, sustainable ways we can support climate friendly AI?
Her friend, Cathy Richards, puts forth ideas for what a small, climate-conscious AI would be:
- Federated/Decentralized. It should move away from centralized, resource-intensive Big Tech systems that are ultimately susceptible to failure.
- Low resource: Climate-conscious AI should be able to function using minimal resources and focus on specific, local needs.
- Contextual: Initiatives like those of Indigenous AI can reduce the need for massive datasets in favor of datasets that value cultural diversity and local relevance.
- Transparent: Clarity on its carbon and energy usage, so users can understand an LLM’s ecological impact.
- Work with nature: Projects like Solar Protocol showcase how AI systems can operate according to environmental dynamics, running only when renewable resources are available or incorporating “sleep” modes based on actual demand rather than peak capacity.
- Intentional: We need to ask ourselves if we really need AI for the task at hand. As Sasha Luccioni, climate lead at Hugging Face stated, do we really need AI to navigate from point A to point B?
The incoming administration will continue its push toward AI deregulation, people and planet be damned. We have the opportunity to change the narrative toward better AI usage.
Really interesting article, thanks Wayan. I agree that in international development we have a responsibility to both raise awareness amongst digital users of responsible use of AI and also promote responsible AI practices amongst technology developers. At CABI we’re engaging with rural farming communities to explore use of AI tools for advisory delivery and there’s some awareness of the pitfalls with AI tools but definitely more that could be done. I’m particularly interested in the USAID Responsible AI training course you mentioned – is this publicly available somewhere? Who is it aimed at?
The USAID Responsible AI training course is only for internal USAID staff, though I could see its utility as a public training. Sadly, the rules around creating anything public at USAID can be quite forbidding.
As we know, this is a big tension in current debates around climate and sustainability overall – the people versus planet discussion. It’s not a dichotomy and pushes us to have to think about both short term and long term solutions at the same time – which is easier said than done!
At Change Agent AI, 100 users use as much electricity in a month as it takes to charge an EV for a month… and our climate impact per user is dropping even lower.
We serve clients who mostly do care and we care, too. That’s why we made sure that the development of our LLM was carbon neutral. Then we designed our backend architecture to minimize emissions.
It is wholly possible to do this—AI companies just have to care enough to do it and their customers have to care enough to demand it… which is the same formula as ever.
Good job being provocative, Wayan Vota. I imagine you might trigger some solid engagement with this one. 🙂
Ha! I’m not trying to be provocative per se, but I do want to counter a narrative that somehow the power/water usage of GenAI will move governments to regulate tech or slow GenAI use. It will not, but better arguments might.
I was on two calls just this morning. What did both Europeans ask about? The Climate impacts of AI and how they could better understand and measure them internally so that they could be consistent in their company values and their use of emerging AI.
So, people do care. Companies in Europe care, even if most in the US don’t.
We need to change the narrative in the places where people don’t care. It’s the same as other tech issues like privacy – remember when people said people don’t care about privacy and “privacy is dead”? But by then it was too late to walk anything back on how our data was being hoovered up.
Anyway, the US isn’t going to regulate this any time soon, so are we just going to ignore it? Or are we going to do something as communities and individuals and those business owners who do care?
That applies to Climate (and everything else that is going to get deregulated over the next 4 years in the US.)