Abstract
This paper considers a class of experimentation games with Lévy bandits encompassing those of Bolton and Harris (1999) and Keller, Rady and Cripps (2005). Its main result is that efficient (perfect Bayesian) equilibria exist whenever players' payoffs have a diffusion component. Hence, the trade-offs emphasized in the literature do not rely on the intrinsic nature of bandit models but on the commonly adopted solution concept (MPE). This is not an artifact of continuous time: we prove that such equilibria arise as limits of equilibria in the discrete-time game. Furthermore, it suffices to relax the solution concept to strongly symmetric equilibrium.
Keywords
Two-Armed Bandit; Bayesian Learning; Strategic Experimentation; Strongly Symmetric Equilibrium.
JEL Codes
- C73: Stochastic and Dynamic Games • Evolutionary Games • Repeated Games
- D83: Search • Learning • Information and Knowledge • Communication • Belief
Reference
Johannes Hörner, Nicolas Klein and Sven Rady, "Overcoming Free-Riding in Bandit Games", TSE Working Paper, No. 20-1132, August 2020, revised February 2021.
Published in
TSE Working Paper, No. 20-1132, August 2020, revised February 2021