Linear Program-Based Policies for Restless Bandits: Necessary and Sufficient Conditions for (Exponentially Fast) Asymptotic Optimality
Nicolas Gast, Bruno Gaujal, Chen Yan
We provide a framework to analyze control policies for the restless Markovian bandit model under both finite and infinite time horizons. We show that, as the population of arms grows to infinity, the value of the optimal control policy converges to the value of a linear program (LP). We give necessary and sufficient conditions for a generic control policy to be (i) asymptotically optimal, (ii) asymptotically optimal with a square-root convergence rate, and (iii) asymptotically optimal with an exponential rate. We then construct the LP-index policy, which is asymptotically optimal with a square-root convergence rate on all models, and with an exponential rate when the model is nondegenerate in finite horizon and satisfies a uniform global attractor property in infinite horizon. We next define the LP-update policy, which is essentially a repeated LP-index policy that solves a new LP at each decision epoch. We conclude with numerical experiments comparing the efficiency of the different LP-based policies.
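The abstract refers to the LP relaxation without stating it. As a rough illustration, the sketch below solves the standard average-reward LP relaxation of a restless bandit with binary actions (passive/active) and an activation budget α that is relaxed to hold only in expectation. The function name `lp_relaxation`, the variable layout, and the toy two-state instance are assumptions made for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def lp_relaxation(P, r, alpha):
    """Upper bound on the per-arm value via the average-reward LP relaxation.

    P     : (S, 2, S) array; P[s, a, s'] is the transition probability of one
            arm from state s to s' under action a (0 = passive, 1 = active).
    r     : (S, 2) array of per-step rewards r[s, a].
    alpha : fraction of arms that may be activated at each step (relaxed
            here to hold only in expectation).
    """
    S = r.shape[0]
    n = 2 * S                     # one variable y[s, a] per state-action pair
    c = -r.reshape(n)             # linprog minimizes, so negate the rewards

    A_eq = np.zeros((S + 2, n))
    b_eq = np.zeros(S + 2)
    # Balance: sum_a y[s', a] = sum_{s, a} y[s, a] P[s, a, s'] for every s'.
    for sp in range(S):
        for s in range(S):
            for a in range(2):
                A_eq[sp, 2 * s + a] -= P[s, a, sp]
        A_eq[sp, 2 * sp] += 1.0
        A_eq[sp, 2 * sp + 1] += 1.0
    # Normalization: y is a probability measure over state-action pairs.
    A_eq[S, :] = 1.0
    b_eq[S] = 1.0
    # Relaxed budget: the expected fraction of active arms equals alpha.
    A_eq[S + 1, 1::2] = 1.0
    b_eq[S + 1] = alpha

    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    assert res.success, res.message
    return -res.fun, res.x.reshape(S, 2)

# Hypothetical two-state arm: activity pays only in state 1; 40% budget.
P = np.array([[[0.9, 0.1], [0.3, 0.7]],   # transitions from state 0
              [[0.4, 0.6], [0.2, 0.8]]])  # transitions from state 1
r = np.array([[0.0, 0.0], [0.5, 1.0]])
value, y = lp_relaxation(P, r, alpha=0.4)
```

Relaxing the hard per-step budget to an expectation constraint is what turns the problem into a single polynomial-size LP; its value is the asymptotic benchmark against which policies such as the LP-index and LP-update policies described above can be compared.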
Funding: This work was supported by Agence Nationale de la Recherche [Grant ANR-19-CE23-0015].