Abstract
I examine how incentives for refutation affect publication quality. I build a sequential model of public experimentation with two scientists, a researcher and a refuter. The researcher chooses when to publish a result confirming a hypothesis, knowing that with some probability the result is a type I error. Publication quality corresponds to the probability that the hypothesis is valid, that is, the complement of the probability of a type I error. Once the result is published, the refuter starts experimenting but, unlike the researcher, he can switch at any time between working on refutation and an outside safe option. When scientists are initially pessimistic about the hypothesis being confirmed and refutation is costly for the researcher, higher refutation rewards lower the equilibrium publication quality when the researcher is more efficient than the refuter; the opposite holds when the refuter is more efficient. In an extension where the researcher's experimentation is private, I prove that publication quality is lower than under public experimentation, suggesting that transparency improves research quality.
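As an illustrative formalization of the quality measure described in the abstract (the notation is assumed here, not taken from the paper: $\alpha$ for the probability of a type I error, $q$ for publication quality):

% Illustrative sketch only; the symbols q and \alpha are assumed notation.
% Publication quality is the probability that the confirmed hypothesis is valid,
% i.e., the complement of the probability of a type I error.
$$ q \;=\; \Pr(\text{hypothesis is valid} \mid \text{result published}) \;=\; 1 - \alpha, \qquad \alpha = \Pr(\text{type I error}). $$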
Reference
Olga Bernard, "Refutation in Research: How to Improve Publication Quality", Annals of Economics and Statistics, vol. 138, June 2020, pp. 21–48.
Published in
Annals of Economics and Statistics, vol. 138, June 2020, pp. 21–48