An adaptive reinforcement learner of spatial interaction based on taxi trajectory data
Chao Sun, Jian Lu

This paper introduces a novel approach for analysing spatial interaction characteristics and land use using taxi trajectory data and urban geographic data. We propose an adaptive reinforcement learning model, grounded in nonlinear theory, to improve the accuracy and adaptability of spatial interaction predictions. By dividing the urban area into smaller units, we construct a spatial interaction matrix that captures push-pull force characteristics and distance features between origins and destinations. The innovation of our model lies in its ability to integrate multiple weak learners into a strong learner, which significantly outperforms traditional gravity-theory-based models in prediction performance (higher R, lower MAE and RMSE). Our findings reveal the importance of adjacent flows in predicting spatial interaction patterns and show that public-transport travel distance is the most significant factor describing the difficulty of completing a spatial interaction. Among the features, push force from origins has the highest relative importance, followed by pull force from destinations and adjacent flows. These results provide valuable insights for traffic and urban planning.
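The weak-learner ensemble described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes synthetic origin-destination features (push force, pull force, public-transport travel distance, adjacent flow) and uses a generic boosting regressor from scikit-learn as a stand-in for the adaptive reinforcement learner, reporting the same metrics (R, MAE, RMSE) and per-feature importances.

```python
# Hypothetical sketch: boosting weak learners to predict OD flows from
# push/pull/distance/adjacent-flow features. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
push = rng.gamma(2.0, 50.0, n)      # push force at origin (assumed scale)
pull = rng.gamma(2.0, 50.0, n)      # pull force at destination
dist = rng.uniform(1.0, 30.0, n)    # public-transport travel distance (km)
adjacent = rng.gamma(1.5, 20.0, n)  # flow of adjacent spatial units
# Gravity-like ground truth plus an adjacent-flow term and noise
flow = push * pull / dist**2 + 0.3 * adjacent + rng.normal(0.0, 20.0, n)

X = np.column_stack([push, pull, dist, adjacent])
X_tr, X_te, y_tr, y_te = train_test_split(X, flow, random_state=0)

# Many shallow trees (weak learners) combined into a strong learner
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]
mae = mean_absolute_error(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R={r:.3f}  MAE={mae:.1f}  RMSE={rmse:.1f}")
print(dict(zip(["push", "pull", "distance", "adjacent"],
               model.feature_importances_.round(3))))
```

The feature-importance printout mirrors the paper's relative-importance comparison of push force, pull force, and adjacent flows, though the values here reflect only the synthetic data-generating process.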