Abstract
This paper considers a class of experimentation games with Lévy bandits encompassing those of Bolton and Harris (1999) and Keller, Rady and Cripps (2005). Its main result is that efficient (perfect Bayesian) equilibria exist whenever players' payoffs have a diffusion component. Hence, the trade-offs emphasized in the literature do not stem from the intrinsic nature of bandit models but from the commonly adopted solution concept (Markov perfect equilibrium, MPE). This is not an artifact of continuous time: we prove that such equilibria arise as limits of equilibria in the discrete-time game. Furthermore, it suffices to relax the solution concept to strongly symmetric equilibrium.
Keywords
Two-Armed Bandit; Bayesian Learning; Strategic Experimentation; Strongly Symmetric Equilibrium
JEL codes
- C73: Stochastic and Dynamic Games • Evolutionary Games • Repeated Games
- D83: Search • Learning • Information and Knowledge • Communication • Belief
Reference
Johannes Hörner, Nicolas Klein, and Sven Rady, "Overcoming Free-Riding in Bandit Games", TSE Working Paper, no. 20-1132, August 2020, revised February 2021.