Deep Reinforcement Learning for Intraday Multireservoir Hydropower Management
Rodrigo Castro-Freibott, Álvaro García-Sánchez, Francisco Espiga-Fernández, Guillermo González-Santander de la Cruz

This study investigates the application of Reinforcement Learning (RL) to optimize intraday operations of hydropower reservoirs. Unlike previous approaches that focus on long-term planning with coarse temporal resolutions and discretized state-action spaces, we propose an RL framework tailored to the Hydropower Reservoirs Intraday Economic Optimization problem. This framework manages continuous state-action spaces while accounting for fine-grained temporal dynamics, including dam-to-turbine delays, gate movement constraints, and power group operations. Our methodology evaluates three distinct action space formulations (continuous, discrete, and adjustments) implemented using modern RL algorithms (A2C, PPO, and SAC). We compare them against both a greedy baseline and Mixed-Integer Linear Programming (MILP) solutions. Experiments on real-world data from a two-reservoir system and a simulated six-reservoir system demonstrate that while MILP achieves superior performance in the smaller system, its performance degrades significantly when scaled to six reservoirs. In contrast, RL agents, particularly those using discrete action spaces and trained with PPO, maintain consistent performance across both configurations, achieving considerable improvements with less than one second of execution time. These results suggest that RL offers a scalable alternative to traditional optimization methods for hydropower operations, particularly in scenarios requiring real-time decision-making or involving larger systems.