Working paper

Regret bound for Narendra-Shapiro bandit algorithms

Sébastien Gadat, Fabien Panloup, and Sofiane Saadane

Abstract

Narendra-Shapiro (NS) algorithms are bandit-type algorithms introduced in the 1960s with a view to applications in psychology and clinical trials. The long-time behavior of such algorithms has been studied in depth, but few results seem to exist in a non-asymptotic setting, which can be of primary interest for applications. In this paper, we study the regret of NS algorithms and address the following question: are NS bandit algorithms competitive from this non-asymptotic point of view? In our main result, we show that competitive bounds can be obtained for their penalized version (introduced in [14]). More precisely, up to a slight modification, the regret of the penalized two-armed bandit algorithm is uniformly bounded by $C\sqrt{n}$ (where $C$ is a positive constant made explicit in the paper). We also extend existing convergence and rate-of-convergence results for the over-penalized bandit algorithm to the multi-armed case, including the convergence, after a suitable renormalization, toward the invariant measure of a Piecewise Deterministic Markov Process (PDMP). Finally, ergodic properties of this PDMP are given in the multi-armed case.
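To fix ideas, here is a minimal Python sketch of a penalized NS-type update for two Bernoulli arms: a success reinforces the pulled arm, while a failure pushes probability mass toward the other arm. The step-size schedule gamma0 / n**alpha, the penalty parameter rho, and the pseudo-regret bookkeeping are illustrative assumptions, not the paper's exact calibration; the C\sqrt{n} bound concerns a slightly modified, carefully tuned version of such a scheme.

import random


def penalized_ns_two_armed(p1, p2, n_steps, gamma0=0.5, alpha=0.5, rho=0.2, seed=0):
    """Penalized NS-type scheme for two Bernoulli arms (illustrative sketch)."""
    rng = random.Random(seed)
    x = 0.5                            # current probability of pulling arm 1
    pseudo_regret = 0.0
    p_best = max(p1, p2)
    for n in range(1, n_steps + 1):
        gamma = gamma0 / n ** alpha    # illustrative step size; the paper tunes this
        pull_arm1 = rng.random() < x
        p = p1 if pull_arm1 else p2
        pseudo_regret += p_best - p    # expected loss incurred by this pull
        success = rng.random() < p
        if pull_arm1:
            if success:
                x += gamma * (1.0 - x)          # reinforce arm 1 on success
            else:
                x -= rho * gamma * x            # penalize arm 1 on failure
        else:
            if success:
                x -= gamma * x                  # reinforce arm 2 on success
            else:
                x += rho * gamma * (1.0 - x)    # penalize arm 2 on failure
        x = min(max(x, 0.0), 1.0)      # guard against leaving [0, 1]
    return x, pseudo_regret


if __name__ == "__main__":
    x, regret = penalized_ns_two_armed(p1=0.7, p2=0.5, n_steps=10_000)
    print(f"final P(arm 1) = {x:.3f}, cumulative pseudo-regret = {regret:.1f}")

In this sketch the pseudo-regret grows as the algorithm occasionally pulls the suboptimal arm; the paper's contribution is to show that, for a suitably penalized and calibrated version, this quantity is uniformly of order \sqrt{n}.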

Keywords

Regret; Stochastic Bandit Algorithms; Piecewise Deterministic Markov Processes

Replaced by

Fabien Panloup, Sofiane Saadane, and Sébastien Gadat, Regret bound for Narendra-Shapiro bandit algorithms, Stochastics, May 2018, pp. 886–926.

Reference

Sébastien Gadat, Fabien Panloup, and Sofiane Saadane, Regret bound for Narendra-Shapiro bandit algorithms, TSE Working Paper, n. 15-556, February 2015, revised May 2016.

Published in

TSE Working Paper, n. 15-556, February 2015, revised May 2016