DOI: 10.1111/nyas.15322 ISSN: 0077-8923

Shared autonomy between human electroencephalography and TD3 deep reinforcement learning: A multi‐agent copilot approach

Chun‐Ren Phang, Akimasa Hirata

Abstract

Deep reinforcement learning (RL) algorithms enable the development of fully autonomous agents that can interact with the environment. Brain–computer interface (BCI) systems decipher implicit human brain signals regardless of the explicit environment. We proposed a novel technique for integrating deep RL and BCI to improve beneficial human interventions in autonomous systems and to improve the decoding of brain activities by considering environmental factors. Shared autonomy was allowed between the action command decoded from the electroencephalography (EEG) of the human agent and the action generated by the twin delayed deep deterministic policy gradient (TD3) agent for a given complex environment. Our proposed copilot control scheme with a full blocker (Co‐FB) significantly outperformed individual EEG (EEG‐NB) or TD3 control. The Co‐FB model achieved a higher target‐approaching score, a lower failure rate, and a lower human workload than the EEG‐NB model. We also proposed a disparity index to evaluate the effect of contradictory agent decisions on the control accuracy and authority of the copilot model. We observed that shifting control authority to the TD3 agent improved performance when BCI decoding was suboptimal. These findings indicate that the copilot system can effectively handle complex environments and that BCI performance can be improved by considering environmental factors.
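
The arbitration idea behind the copilot scheme can be illustrated with a short sketch. The Python code below is a minimal, hypothetical rendering only: `decode_eeg_command`, `td3_policy`, `copilot_full_blocker`, `disparity_index`, and the `authority` and `block_threshold` parameters are all illustrative stand-ins, not the authors' implementation, and the disparity computation is a plausible reading of the abstract rather than the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_eeg_command(eeg_window):
    # Stub EEG decoder: maps a window of EEG samples to a discrete
    # steering command in {-1, 0, +1}. A real BCI pipeline would
    # extract features (e.g., band power) and classify them here.
    return float(rng.choice([-1.0, 0.0, 1.0]))

def td3_policy(state):
    # Stub TD3 actor: returns a continuous action in [-1, 1].
    # A trained actor network would be queried here.
    return float(np.tanh(state @ np.array([0.3, -0.2])))

def copilot_full_blocker(human_action, td3_action,
                         authority=0.5, block_threshold=0.8):
    # Shared-autonomy arbitration (assumed Co-FB-style behavior):
    # blend the two agents' actions, weighted by the human's control
    # authority, but let the autonomous agent fully block (override)
    # the human command when the two strongly disagree.
    disagreement = abs(human_action - td3_action) / 2.0  # in [0, 1]
    if disagreement > block_threshold:
        return td3_action  # full blocker: TD3 agent overrides
    return authority * human_action + (1.0 - authority) * td3_action

def disparity_index(human_actions, td3_actions):
    # Illustrative disparity index: mean normalized disagreement
    # between the two agents' decisions over an episode.
    h, t = np.asarray(human_actions), np.asarray(td3_actions)
    return float(np.mean(np.abs(h - t) / 2.0))

# Toy control loop with a random stand-in for the environment state.
human_log, td3_log = [], []
state = rng.normal(size=2)
for step in range(100):
    h = decode_eeg_command(None)
    a = td3_policy(state)
    action = copilot_full_blocker(h, a)
    human_log.append(h)
    td3_log.append(a)
    state = rng.normal(size=2)  # stand-in for an environment transition

print("disparity index:", disparity_index(human_log, td3_log))
```

In this sketch, raising `authority` toward 1 shifts control to the human agent, while lowering it shifts control to the TD3 agent, which mirrors the abstract's observation that giving the TD3 agent more authority helps when EEG decoding is suboptimal.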