NEWS • 2025-05-14
Aligning AI development with planetary and societal sustainability
As the world faces the urgent task of substantially cutting greenhouse gas emissions, artificial intelligence (AI) offers both large opportunities and serious risks. To help ensure that AI supports a stable biosphere, researchers propose a new guiding principle – “Earth alignment” – aiming to steer AI development and uses in ways that promote rather than undermine sustainable development for all.

Photo: Nidia Dias & Google DeepMind / Better Images of AI
AI for better or for worse
It is often stated that AI can be used to accelerate innovation and scale climate action, protect biodiversity and reduce environmental degradation. However, the authors of the article, published in Nature Sustainability, point out an aggravating factor:
“AI is deeply embedded in a growing number of technologies, economic activities and people’s daily lives. Without a clear direction, the increased infusion of AI will continue to erode climate ambitions, social trust and the fabric of life”, says Victor Galaz, co-author of the article and Programme Director at the Beijer Institute.
Primarily designed for governments, international organisations, companies and investors, the Earth alignment principle focuses on aligning AI’s development and deployment with the need to reduce greenhouse gas emissions, protect biodiversity, and support equitable access to sustainability tools.
The concern is not only the high CO₂ emissions from training and using AI, the authors argue, but also that technological improvements that increase efficiency can drive greater resource use, a phenomenon known as the Jevons paradox. For example, smarter logistics may reduce emissions per delivery but enable more consumption and production, increasing overall environmental harm. Moreover, the development and use of AI risk causing major social harm, as they could deepen existing inequalities, undermine social stability, and weaken a shared understanding of reality.
Three criteria for Earth alignment
To counter this, the proposed framework includes three criteria that AI development and use need to fulfil to reach strong Earth alignment:
- Accelerate the transition to sustainable production and consumption in ways that respect planetary boundaries or at least do not obstruct them.
- Ensure equitable access to AI tools for global sustainability, especially in low-income regions, and prevent the concentration of power.
- Foster social cohesion and trust and provide access to reliable information for planetary stewardship.
Applying Earth alignment
The authors call on governments and global bodies like the UN to label AI systems as “high” or “unacceptable” risk if they pose a clear threat to Earth’s stability, and to steer investments toward AI projects that support planetary stewardship, favouring open-source initiatives and inclusion beyond wealthy nations. They argue that companies must report both the environmental and societal impacts of AI, and that companies, organisations and other entities developing and using AI tools should include Earth alignment in their governance frameworks and risk assessments.
“Responsible uses of AI offer intriguing opportunities to the sustainability sciences, and can be a powerful tool for industries, communities and other change-makers in the driver’s seat of a climate transition. However, these benefits will not materialise without increased transparency, oversight and regulation of AI that focuses on mitigating systemic sustainability risks”, concludes Victor Galaz.
Reference: Gaffney, O., A. Luers, F. Carrero-Martinez, B. Oztekin-Gunaydin, F. Creutzig, V. Dignum, V. Galaz, N. Ishii, F. Larosa, M. Leptin, and K. Takahashi Guevara. 2025. The Earth alignment principle for artificial intelligence. Nature Sustainability.