Deep Reinforcement Learning-Graph Neural Network-Dynamic Clustering Triplet for Adaptive Multi-Energy Microgrid Optimization
Keywords: Deep Reinforcement Learning, Graph Neural Network, Dynamic Clustering, Microgrid, Renewable Energy Sources
Abstract. Centralized energy systems are often limited by their dependence on large power plants and extensive transmission networks, making them vulnerable to single points of failure and less resilient to disruptions. Compared to such systems, microgrids offer resilience, enhanced energy efficiency, and improved integration of renewable resources, enabling localized energy management and reduced reliance on fossil fuels. Deep Reinforcement Learning (DRL) has shown potential for microgrid energy optimization by enabling intelligent, adaptive control over energy resources and energy exchange. By learning from interactions with the environment, a DRL agent dynamically adjusts the power outputs of distributed energy resources, manages energy storage systems, and balances energy exchange between microgrid elements and with the main grid, aiming to minimize costs and ensure reliable power availability. However, incorporating spatial relationships into the DRL action space significantly increases computational demands. To address this, we introduce a novel method that integrates DRL, Graph Neural Networks (GNNs), and dynamic clustering to optimize microgrid operations. GNNs are specialized deep learning models that adapt to graphs of varying sizes and structures; this adaptability enables a GNN-equipped DRL agent to learn from and apply knowledge to a wide range of network topologies. The same agent can also operate on subsets of the network, or sub-microgrids, which improves the scalability and efficiency of the optimization process and enables distance and routing optimization without an aggregated model. This approach addresses the computational challenges associated with large action spaces and varying topologies in microgrid management.
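To illustrate the topology-agnostic property that makes a GNN-equipped DRL agent reusable across microgrids and sub-microgrids of different sizes, the following is a minimal sketch, assuming PyTorch; it is not the authors' implementation. The node features (load, generation, storage level), the mean-aggregation message passing, the layer sizes, and the names GraphConv and GNNPolicy are illustrative assumptions rather than details from this work.

```python
# Minimal sketch of a GNN-based DRL policy network (illustrative, not the
# authors' implementation). Each node is a bus in the microgrid; the
# adjacency matrix encodes the topology. Because the graph convolution
# shares weights across nodes, the same network processes microgrids or
# sub-microgrids of any size without retraining a fixed-size model.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One round of mean-aggregation message passing over the grid graph."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) with self-loops.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear(adj @ x / deg))


class GNNPolicy(nn.Module):
    """Maps a microgrid graph of arbitrary size to per-node control actions."""

    def __init__(self, node_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.conv1 = GraphConv(node_dim, hidden_dim)
        self.conv2 = GraphConv(hidden_dim, hidden_dim)
        # Per-node head: e.g., a dispatch or charge setpoint in [-1, 1].
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.conv2(self.conv1(x, adj), adj)
        return torch.tanh(self.head(h))


if __name__ == "__main__":
    # Toy 5-bus sub-microgrid; features = [load, generation, storage level]
    # (an assumed feature set for the example).
    num_nodes, node_dim = 5, 3
    x = torch.randn(num_nodes, node_dim)
    adj = torch.eye(num_nodes)       # self-loops
    adj[0, 1] = adj[1, 0] = 1.0      # line between buses 0 and 1
    adj[1, 2] = adj[2, 1] = 1.0      # line between buses 1 and 2
    policy = GNNPolicy(node_dim)
    actions = policy(x, adj)         # one setpoint per bus
    print(actions.shape)             # torch.Size([5, 1])
```

Because the policy's parameters are independent of the number of nodes, the same trained weights can be applied to each cluster produced by the dynamic clustering step, which is what allows optimization per sub-microgrid without building an aggregated model.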