Summary
As climate change alters the physical, built, and social environment that humans depend on for resources, tools to aid climate change adaptation planning are increasingly urgent. In response, MA’AT proposes to combine machine learning with health and environmental modelling to develop a fully functional prototype of a multi-modular AI framework for adaptation planning under deep climate uncertainty.
Focusing on climate adaptation policies for rain events within the Copenhagen capital region as a case study, the MA’AT Proof-of-Concept will learn an optimal policy that maximises delayed rewards in the form of societal wellbeing, within an agent-based modelling (ABM) environment.
Using the shared socioeconomic pathways (SSPs) as the basis for probabilistic priors for projecting different exogenous variables over time, this 2-year project will demonstrate a fully integrated probabilistic modelling framework that captures the dynamic interactions between climate change-induced flooding and other stressors due to heavy rain, future adaptation plans, and human wellbeing.
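To make the role of the SSP-based priors concrete, the sketch below draws Monte Carlo trajectories of a single exogenous variable (an extreme-rainfall intensity index) conditioned on an SSP scenario. The prior values and the compounding-growth model are illustrative placeholders, not project parameters.

```python
import numpy as np

# Hypothetical SSP-conditioned priors on the annual growth rate of an
# extreme-rainfall intensity index (values are illustrative only).
SSP_PRIORS = {
    "SSP1-2.6": {"growth_mean": 0.002, "growth_sd": 0.001},
    "SSP2-4.5": {"growth_mean": 0.005, "growth_sd": 0.002},
    "SSP5-8.5": {"growth_mean": 0.010, "growth_sd": 0.004},
}

def sample_rainfall_trajectory(ssp, years=30, base_intensity=1.0,
                               n_samples=1000, seed=0):
    """Draw Monte Carlo trajectories of a rainfall-intensity index under an SSP prior."""
    rng = np.random.default_rng(seed)
    prior = SSP_PRIORS[ssp]
    # One growth rate per sample, drawn from the prior, compounded over time.
    growth = rng.normal(prior["growth_mean"], prior["growth_sd"],
                        size=(n_samples, 1))
    t = np.arange(years)
    return base_intensity * (1.0 + growth) ** t  # shape: (n_samples, years)

trajectories = sample_rainfall_trajectory("SSP5-8.5")
```

Each row is one plausible future; feeding such ensembles into the ABM is what lets the framework reason under scenario uncertainty rather than a single forecast.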
Why MA’AT?
Climate-induced wellbeing loss can occur in the immediate aftermath of a flood as individuals exposed to the event reckon with the damage and trauma of experiencing a natural disaster, or as a gradual, latent loss of the capabilities necessary to lead a fulfilling and meaningful life. Transport systems play a vital role in ensuring individuals’ resilience to climate events both in the short- and long-term, and MA’AT is designed to help identify the pathways that can enable transport systems to keep on playing this role as climate change disrupts human flourishing in both minor and major ways.
The cross-cutting effects of climate change force us to think outside of our disciplinary silos. By employing a system-of-systems approach, MA’AT can showcase how the effects of a changing climate percolate through both physical and social systems to produce divergent outcomes. Our focus on reinforcement learning, meanwhile, allows researchers, policymakers, and stakeholders alike to query which sequences of actions contribute to producing desired results over long periods of time.
How?
Identifying a suitable framework for deriving a climate adaptation policy depends heavily on the parameters of the ABM environment. A tradeoff between accuracy and efficiency must be made to derive a good policy that is not too computationally expensive. Our aim is to compare different approaches, such as classical optimisation, Bayesian optimisation, and deep reinforcement learning (DRL) frameworks. As model size and complexity increase, the most viable approach is to generalise through DRL. The goal is to find the sequence of policy changes that protects Copenhagen at the lowest socioeconomic cost.
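Whichever search method is chosen, it needs a common environment interface to act against. The sketch below shows a minimal stand-in for such an ABM environment; the state variables, actions, costs, and dynamics are illustrative assumptions, not the project's actual model.

```python
import numpy as np

class FloodAdaptationEnv:
    """Minimal sketch of an ABM-style environment interface for policy search.

    State, actions, and dynamics are illustrative placeholders: the agent
    spends a budget on protection measures, and reward is a wellbeing proxy
    penalising both flood damage and spending.
    """
    ACTIONS = ["do_nothing", "expand_drainage", "build_retention_basin"]

    def __init__(self, horizon=30, seed=0):
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.t = 0
        self.protection = 0.0   # fraction of rain severity the city absorbs
        self.budget = 10.0      # remaining adaptation budget
        return np.array([self.t, self.protection, self.budget])

    def step(self, action):
        cost = {"do_nothing": 0.0, "expand_drainage": 1.0,
                "build_retention_basin": 3.0}[action]
        gain = {"do_nothing": 0.0, "expand_drainage": 0.05,
                "build_retention_basin": 0.15}[action]
        self.budget -= cost
        self.protection = min(1.0, self.protection + gain)
        rain = self.rng.random()                   # exogenous rain severity
        flood_damage = max(0.0, rain - self.protection)
        reward = -flood_damage - 0.1 * cost        # wellbeing proxy
        self.t += 1
        done = self.t >= self.horizon or self.budget <= 0
        return np.array([self.t, self.protection, self.budget]), reward, done
```

Classical optimisation, Bayesian optimisation, and DRL can all be benchmarked against the same `reset`/`step` interface, which is what makes the comparison fair.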
This requires the agent to have foresight and the ability to weigh current decisions against future ones. To identify the best implementation, we will compare multiple DRL approaches, ranging from standard algorithms to newer, more sophisticated ones designed to learn from distributed systems with independent agents.
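The foresight described above is what the discounted return formalises: with a discount factor close to one, a costly measure today can outweigh a stream of small future losses. A minimal sketch, with purely illustrative reward numbers:

```python
def discounted_return(rewards, gamma=0.99):
    """Collapse a stream of future rewards into a single present value.

    A high gamma makes the agent far-sighted: an upfront adaptation cost
    today can be justified by flood damage avoided years later.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Toy comparison (illustrative values): do nothing and absorb yearly flood
# losses, versus paying an upfront adaptation cost once.
myopic = discounted_return([0, -3, -3, -3])     # recurring flood losses
proactive = discounted_return([-5, 0, 0, 0])    # invest now, avoid losses
```

Here the proactive sequence scores higher despite its larger immediate cost, which is exactly the tradeoff the agent must learn to make.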
By parallelising agents and synchronising their experience onto a single learner, the CPU and GPU work of interacting with the environment and learning from it can be distributed across multiple machines. This framework should allow for scaling in action variety, environment complexity (for instance, considering a larger area than Copenhagen), and longer time series or finer-grained timesteps.
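The actor-learner pattern behind this can be sketched compactly. The version below uses threads and an in-process queue purely for illustration; a real deployment would run the actors as processes on separate machines, and the stubbed transitions stand in for actual environment interaction.

```python
import queue
import threading

def actor(actor_id, n_steps, experience_queue):
    """Each actor steps its own environment copy (stubbed here) and ships
    experience to the shared queue."""
    for step in range(n_steps):
        transition = {"actor": actor_id, "step": step, "reward": 0.0}  # placeholder
        experience_queue.put(transition)
    experience_queue.put(None)  # sentinel: this actor is finished

def run_actor_learner(n_actors=4, n_steps=10):
    """A single learner drains experience from all actors; in practice the
    actors would be remote processes and the learner would hold the GPU."""
    experience_queue = queue.Queue()
    actors = [threading.Thread(target=actor, args=(i, n_steps, experience_queue))
              for i in range(n_actors)]
    for t in actors:
        t.start()
    finished, replay = 0, []
    while finished < n_actors:
        item = experience_queue.get()
        if item is None:
            finished += 1
        else:
            replay.append(item)  # a real learner would run a gradient update here
    for t in actors:
        t.join()
    return replay
```

Because actors only need the current policy and a channel to the learner, adding machines scales environment interaction without changing the learner's update loop.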
Team
Publications
1.
Climate Adaptation with Reinforcement Learning: Experiments with Flooding and Transportation in Copenhagen
Miguel Costa, Morten W. Petersen, Arthur Vandervoort, Martin Drews, Karyn Morrissey, Francisco C. Pereira
Tackling Climate Change with Machine Learning Workshop at NeurIPS 2024