Article

Gaussian Agency problems with memory and Linear Contracts

Eduardo Abi Jaber and Stéphane Villeneuve

Abstract

Can a principal still offer optimal dynamic contracts that are linear in end-of-period outcomes when the agent controls a process that exhibits memory? We provide a positive answer by considering a general Gaussian setting in which the output dynamics are not necessarily semimartingales or Markov processes. We introduce a rich class of principal–agent models that encompasses dynamic agency models with memory. From a mathematical point of view, we show how contracting problems with Gaussian Volterra outcomes can be transformed into problems with semimartingale outcomes by a change of variables, which allows the use of the martingale optimality principle. Our main contribution is to show that, for one-dimensional models, this setting always admits optimal contracts that are linear in end-of-period observable outcomes, with a deterministic optimal level of effort. In higher dimensions, we show that linear contracts remain optimal when the effort cost function is radial, and we quantify the gap between linear contracts and optimal contracts for more general quadratic effort costs.
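To illustrate the kind of output process the abstract refers to, here is a minimal numerical sketch (not taken from the paper) of a Gaussian Volterra process X_t = ∫₀ᵗ K(t − s) dW_s, using an assumed fractional kernel K(t) = t^(H − 1/2). For H ≠ 1/2 such a process exhibits memory and is in general neither Markov nor a semimartingale:

```python
import numpy as np

def simulate_volterra(n_steps=500, T=1.0, H=0.3, seed=0):
    """Left-point Euler discretization of X_t = int_0^t K(t - s) dW_s
    with the (illustrative) fractional kernel K(t) = t**(H - 1/2)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments
    t = np.arange(1, n_steps + 1) * dt           # time grid
    X = np.empty(n_steps)
    for i in range(n_steps):
        # evaluate the kernel at the elapsed time t_i - s_j (shifted by dt
        # to avoid the singularity at 0 when H < 1/2)
        k = (t[i] - t[: i + 1] + dt) ** (H - 0.5)
        X[i] = np.sum(k * dW[: i + 1])
    return t, X

t, X = simulate_volterra()
# X is a centered Gaussian path whose increments are correlated (memory).
```

The kernel choice, parameter values, and function name are hypothetical and only serve to visualize a non-Markov Gaussian output of the type the paper studies.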

Keywords

Principal–agent models; Continuous-time control problems

JEL codes

  • C61: Optimization Techniques • Programming Models • Dynamic Analysis
  • C73: Stochastic and Dynamic Games • Evolutionary Games • Repeated Games

Replaces

Eduardo Abi Jaber and Stéphane Villeneuve, "Gaussian Agency problems with memory and Linear Contracts", TSE Working Paper, no. 22-1363, September 2022.

Reference

Eduardo Abi Jaber and Stéphane Villeneuve, "Gaussian Agency problems with memory and Linear Contracts", Finance and Stochastics, vol. 29, January 2025, pp. 143–176.

Published in

Finance and Stochastics, vol. 29, January 2025, pp. 143–176