A deep reinforcement learning based approach for optimal offloading decisions
Jianji Ren, Donghao Yang, Yongliang Yuan, Huihui Wei, Zhenxi Wang
Concern about power resource constraints in the distribution environment is growing, as the proliferation of power-consuming devices imposes an unaffordable burden on terminal loads and the quality of power consumption cannot be guaranteed. Acquiring the optimal offloading decision for power resources has therefore become an urgent problem. To tackle this challenge, a novel reinforcement learning algorithm, Deep Q Network with a partial offloading strategy (DQNP), is proposed to optimize power resource allocation under high computational demand. In the DQNP, a coupled coordination degree model and a Lyapunov algorithm are introduced to trade off and decouple the relationships between local and edge execution and between latency and energy consumption. To derive the optimal offloading decision, the resource computation utility function is selected as the objective function. In addition, model pruning is applied to further reduce training time and improve inference results. Results show that the proposed offloading mechanism significantly decreases the objective function value and reduces the weighted sum of latency and energy consumption by an average of 3.61%–7.31% relative to other state-of-the-art algorithms. The energy loss in the power distribution process is also successfully mitigated, and the effectiveness of the proposed algorithm is verified.
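To make the weighted latency–energy objective concrete, the following is a minimal sketch, not the authors' implementation: it evaluates a standard weighted-sum cost for a partially offloaded task over a discretized set of offloading ratios, which is the kind of action space a DQN-style agent could search. All parameter names and values (f_local, f_edge, rate, kappa, tx_power, w, task sizes) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed model, not the paper's exact formulation) of a
# weighted latency-energy cost for partial offloading.
import numpy as np

def weighted_cost(task_bits, cycles_per_bit, offload_ratio,
                  f_local=1e9, f_edge=5e9, rate=1e7,
                  kappa=1e-27, tx_power=0.5, w=0.5):
    """Weighted sum of latency and energy for a partially offloaded task.

    offload_ratio in [0, 1]: fraction of the task executed at the edge.
    """
    local_cycles = (1 - offload_ratio) * task_bits * cycles_per_bit
    edge_cycles = offload_ratio * task_bits * cycles_per_bit

    t_local = local_cycles / f_local           # local compute time
    t_tx = offload_ratio * task_bits / rate    # upload time
    t_edge = edge_cycles / f_edge              # edge compute time
    latency = max(t_local, t_tx + t_edge)      # local and edge run in parallel

    e_local = kappa * local_cycles * f_local ** 2  # CPU dynamic energy model
    e_tx = tx_power * t_tx                         # transmission energy
    energy = e_local + e_tx

    return w * latency + (1 - w) * energy

# A coarse DQN-style action space: discretized offloading ratios. A trained
# agent would pick the action with the lowest predicted long-term cost; here
# we simply enumerate the one-step cost for illustration.
actions = np.linspace(0.0, 1.0, 11)
costs = [weighted_cost(task_bits=2e6, cycles_per_bit=500, offload_ratio=a)
         for a in actions]
print(f"best offloading ratio: {actions[int(np.argmin(costs))]:.1f}")
```

In the full DQNP scheme described above, this per-step cost would be shaped by the Lyapunov and coupled coordination terms rather than used directly; the sketch only illustrates how the latency–energy trade-off enters the objective.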