19 October 2022, 12:30–13:30
Auditorium A4 – Level 1
Digital Workshop
Abstract
The best-performing and most popular algorithms are often the least explainable. In parallel, there is growing concern and evidence that sophisticated algorithms may engage, autonomously, in profit-maximizing but welfare-damaging strategies. Drawing on the literature on self-regulation and following recent regulatory proposals, we model a regulator who seeks to encourage algorithmic compliance through the threat of (costly and imperfect) audits. Firms may invest in "explainability" to better understand their own algorithms and reduce their cost of compliance. We find that, when audit efficacy is not affected by explainability, audit regulation always induces investment in explainability. Mandatory disclosure of the explainability level makes regulation even more effective, because it allows firms to signal compliance. If, instead, explainability facilitates regulatory audits, a firm may attempt to hide potential misconduct behind algorithmic opacity. Because of regulatory opportunism, mandatory disclosure may further deter investment in explainability. In these cases, regulatory audits may be counterproductive, and laissez-faire or minimum explainability standards should be envisaged.