Abstract
Applied researchers often combine Difference in Differences (DID) with conditioning on pre-treatment outcomes when the Parallel Trends Assumption (PTA) fails. I examine both the theoretical and the empirical basis for this approach. I show that the usual theoretical argument, namely that the two methods combine their strengths, with DID differencing out permanent confounders while conditioning on pre-treatment outcomes captures transitory ones, is incorrect. Worse, conditioning on pre-treatment outcomes can increase the bias of DID. Simulations of a realistic model of earnings dynamics and of selection into a Job Training Program (JTP) show that this bias can be sizable in practice. Revisiting empirical studies that compare DID with RCTs, I also find that conditioning on pre-treatment outcomes increases the bias of DID. Taken together, these results suggest that DID should not be combined with conditioning on pre-treatment outcomes, but rather used conditioning on covariates that are fixed over time. When the PTA fails, DID applied symmetrically around the treatment date performs well both in simulations and when compared with RCTs. Matching on several observations of pre-treatment outcomes also performs well in simulations, but evidence on its empirical performance is lacking.
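A minimal sketch of the mechanism behind the abstract's central claim (this is a stylized toy model, not the paper's actual simulation of earnings dynamics; all parameter values and the selection rule are hypothetical): when selection into treatment operates only on a permanent earnings component, plain DID differences the confounder out and is unbiased, while additionally conditioning on a single pre-treatment outcome reintroduces bias through mean reversion in the transitory component.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
rho = 0.5  # hypothetical AR(1) persistence of the transitory earnings shock

# Earnings = permanent individual component + stationary AR(1) transitory shock
mu = rng.normal(0.0, 1.0, N)
u_pre = rng.normal(0.0, 1.0, N)
u_post = rho * u_pre + rng.normal(0.0, np.sqrt(1 - rho**2), N)  # Var(u_t) = 1

# Hypothetical selection rule: entry into the program depends ONLY on the
# permanent component; the true treatment effect is zero.
D = (mu < 0).astype(float)
y_pre = mu + u_pre
y_post = mu + u_post

# Plain DID: the permanent confounder differences out, so the estimate is ~0.
did = (y_post[D == 1].mean() - y_pre[D == 1].mean()) \
    - (y_post[D == 0].mean() - y_pre[D == 0].mean())

# DID conditioning on the pre-treatment outcome: regress the outcome change
# on D and y_pre. Holding y_pre fixed, treated units (low mu) have high u_pre,
# which mean-reverts, so the coefficient on D is pushed away from zero.
X = np.column_stack([np.ones(N), D, y_pre])
did_cond = np.linalg.lstsq(X, y_post - y_pre, rcond=None)[0][1]

print(f"plain DID estimate:        {did:+.3f}")   # close to the true effect, 0
print(f"DID conditional on y_pre:  {did_cond:+.3f}")  # clearly biased
```

In this toy setup the conditional estimator is biased downward even though plain DID is consistent, illustrating why adding pre-treatment outcomes to the conditioning set can make DID worse rather than better.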
Keywords
Difference in Differences - Matching - Selection Model - Treatment Effects
JEL codes
- C21: Cross-Sectional Models • Spatial Models • Treatment Effect Models • Quantile Regressions
- C23: Panel Data Models • Spatio-temporal Models
Reference
Sylvain Chabé-Ferret, "Should We Combine Difference In Differences with Conditioning on Pre-Treatment Outcomes?", TSE Working Paper, no. 17-824, June 2017.
Published in
TSE Working Paper, no. 17-824, June 2017