Doctoral Workshop on Decision Mathematics and Statistics

June 6, 2024, 09:00–15:00

Room Auditorium 5

Background and objective 

The objective of the workshop, organized by the Department of Mathematics and Statistics, is to foster interest in emerging developments in decision mathematics. The workshop provides an opportunity for PhD students, postdoctoral researchers, and faculty members to interact and discuss both theoretical and empirical contributions, with a special focus this year on data science/ML, finance, games, optimization, and statistics. It will consist of eight talks and will be held on Thursday, June 6, 2024, 9:55–16:30 (UTC+2).

Organizing Committee:

Jérôme Bolte, Abdelaati Daouia, Sébastien Gadat, Stéphane Villeneuve 

Conference Secretariat:

Aline Soulié

Anh Dung Le - "Well-posedness of McKean-Vlasov SDEs with density-dependent drift"

In this paper, we study the well-posedness of McKean-Vlasov SDEs whose drift depends pointwise on the marginal density and satisfies a local integrability condition in the time-space variables. The drift is also assumed to be Lipschitz continuous in the distribution variable with respect to the Wasserstein metric $W_p$. Our approach proceeds by approximation with mollified SDEs. We establish a new estimate on the Hölder continuity in time of the marginal density. We then prove that the marginal distributions (resp. marginal densities) of the mollified SDEs converge in $W_p$ (resp. in the topology of compact convergence) to the solution of the Fokker-Planck equation associated with the SDE of interest. Weak existence of a solution follows from an application of the superposition principle. We also prove strong existence of a solution. Weak and strong uniqueness are obtained in the case where $p=1$, the drift coefficient is bounded, and the diffusion coefficient is distribution-free.
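
For orientation, the class of equations in question can be written schematically (our notation, not necessarily the speaker's) as

$$ dX_t = b\big(t, X_t, \mu_t, \rho_t(X_t)\big)\,dt + \sigma(t, X_t)\,dW_t, \qquad \mu_t = \mathrm{Law}(X_t), $$

where $\rho_t$ denotes the density of the marginal law $\mu_t$; the pointwise dependence of the drift $b$ on $\rho_t(X_t)$ is what distinguishes this density-dependent class from classical distribution-dependent McKean-Vlasov equations.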

Léo Portales - "On the sequential convergence of Lloyd's algorithms"

Lloyd’s algorithm is an iterative method for the quantization problem: the task of approximating a target probability measure by a discrete one. It is widely used in digital applications where one aims to reduce the dimensionality of a dataset. The method can be interpreted as a gradient descent algorithm, which implies that its accumulation points are critical points of a certain quantization functional, describable in terms of the Wasserstein distance and optimal transport. We study sequential convergence (to a single accumulation point) for two variants of Lloyd’s method: (i) optimal quantization with a weighted empirical measure and (ii) uniform quantization with a uniform empirical measure. In both cases, we prove sequential convergence of the iterates under a definability assumption on the compactly supported continuous target measure: it should have a globally subanalytic density. This includes, for example, analytic densities truncated to a semi-algebraic set. The argument leverages the log-analytic nature of globally subanalytic integrals, the interpretation of Lloyd’s methods as gradient methods, and the convergence analysis of gradient algorithms under Kurdyka-Łojasiewicz assumptions. As a by-product, we also obtain definability results for more general semi-discrete optimal transport losses, such as transport distances with general costs, the max-sliced Wasserstein distance, and the entropy-regularized optimal transport distance.
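
As a concrete point of reference, the following minimal Python sketch implements the classical weighted Lloyd iteration (an illustration of the general scheme, not the speaker's exact variants):

```python
import numpy as np

def lloyd(points, weights, centers, n_iter=100):
    """Classical Lloyd iteration: quantize the weighted empirical
    measure sum_i w_i * delta_{x_i} by k Dirac masses.  Sketch only."""
    for _ in range(n_iter):
        # Assignment step: send each sample to its nearest center
        # (the cells of this assignment are the Voronoi cells).
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # Update step: move each center to the weighted barycenter of
        # its cell; this is the gradient-flavoured step on the
        # quantization functional mentioned in the abstract.
        for k in range(len(centers)):
            in_cell = labels == k
            if weights[in_cell].sum() > 0:
                centers[k] = np.average(points[in_cell], axis=0,
                                        weights=weights[in_cell])
    return centers

# Example: quantize 1000 planar Gaussian samples by 5 points.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))
w = np.full(1000, 1e-3)
c = lloyd(x, w, x[rng.choice(1000, size=5, replace=False)].copy())
```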

Joseph Hachem - "Extreme value estimation for heterogeneous heavy-tailed data"

In extreme value analysis, it has recently been shown that, in an i.i.d. context, one can use a de-randomization trick, replacing a random threshold in the estimator of interest with its deterministic counterpart, in order to estimate several extreme risks simultaneously. In this talk, I will show how this method can be used to handle the estimation of several tail quantities (tail index, expected shortfall, ...) in general dependence/heteroskedasticity/heterogeneity settings, under a weighted $L^1$ assumption on the discrepancy between the average distribution of the data and the prevailing distribution. This technique can also be used to deal with multivariate heterogeneous data, which cannot be handled with current methods.
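
To fix ideas (notation ours, not necessarily the speaker's): the classical Hill estimator of the tail index,

$$ \hat{\gamma}_k = \frac{1}{k} \sum_{i=1}^{k} \log\frac{X_{n-i+1,n}}{X_{n-k,n}}, $$

uses the random order statistic $X_{n-k,n}$ as its threshold; the de-randomization trick referred to above replaces such a random threshold with a deterministic high quantile, so that one deterministic threshold can underpin the joint estimation of several tail quantities.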

Clément Lalanne - "Private learning from users' data"

Learning from users’ data has proved tremendously effective at solving many real-world problems. However, this paradigm comes with new challenges, such as preserving users’ privacy. Differential Privacy was proposed as a way to bound the quantity of information leaked by released statistics, thereby bounding the power of membership tests and giving strong guarantees for users’ privacy. Being a rather strong constraint, it comes at a cost in estimation quality. A natural question is thus to quantify this cost and to compare it with the already-present sampling noise. In this presentation, we will see a generic way to quantify this cost via coupling arguments, illustrated by examples ranging from simple Bernoulli estimation to nonparametric density estimation.
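
As a minimal illustration of the Bernoulli case (our sketch, assuming the standard Laplace mechanism rather than the talk's specific estimators):

```python
import numpy as np

def private_bernoulli_mean(samples, epsilon, rng):
    """epsilon-differentially private estimate of a Bernoulli mean via
    the Laplace mechanism (illustrative sketch).  The empirical mean of
    n values in {0,1} has sensitivity 1/n, so Laplace noise of scale
    1/(n * epsilon) yields epsilon-differential privacy."""
    n = len(samples)
    return np.mean(samples) + rng.laplace(scale=1.0 / (n * epsilon))

# The added privacy noise is O(1/(n * eps)), to be compared with the
# O(1/sqrt(n)) sampling noise -- the comparison the talk quantifies
# via coupling arguments.
rng = np.random.default_rng(1)
data = rng.binomial(1, 0.3, size=10_000)
print(private_bernoulli_mean(data, epsilon=0.5, rng=rng))
```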

Marelys Crespo Navas - "Discretisation of Langevin Diffusion in a weak log-concave case"

The Euler discretisation of the Langevin diffusion, also known as the Unadjusted Langevin Algorithm, is commonly used in machine learning for sampling from a given distribution $\mu \propto e^{-U}$. In this work we investigate a potential $U: \mathbb{R}^d \longrightarrow \mathbb{R}$ which is weakly convex with a Lipschitz gradient. We parameterize the weak convexity with the help of the Kurdyka-Łojasiewicz (KL) inequality, which permits handling vanishing-curvature settings and is far less restrictive than the simple strongly convex case. We prove that the final simulation horizon needed to obtain an $\varepsilon$-approximation (in terms of entropy) is of order $\varepsilon^{-1} d^{1+2(1+r)^2} \text{Poly}(\log(d),\log(\varepsilon^{-1}))$, where the parameter $r$ appears in the KL inequality and varies between $0$ (strongly convex case) and $1$ (limiting Laplace situation).
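
For reference, the algorithm being analysed is the following simple iteration (a generic fixed-step sketch in Python; the step-size choices studied in the talk may differ):

```python
import numpy as np

def ula(grad_U, x0, step, n_steps, rng):
    """Unadjusted Langevin Algorithm: the Euler discretisation
    x_{k+1} = x_k - step * grad U(x_k) + sqrt(2 * step) * N(0, I),
    targeting mu proportional to exp(-U).  Illustrative sketch."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
    return x

# Example: U(x) = |x|^2 / 2, i.e. sampling from a standard Gaussian.
rng = np.random.default_rng(0)
draw = ula(lambda x: x, x0=np.zeros(2), step=1e-2, n_steps=5_000, rng=rng)
```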

Le Quoc-Tung - "On the geometric and computational complexity of polynomial bilevel optimization"

Bilevel optimization is an important mathematical tool for modelling phenomena in many domains, such as economic game theory, decision science, and machine learning, to name but a few. Despite its importance, efficient and scalable algorithms for bilevel optimization have mostly been developed under (strong) convexity of the lower-level problem, which is unrealistic for many practical tasks. In the quest to understand more general bilevel problems, we relax the strong convexity of the lower level and consider polynomial bilevel optimization, i.e., polynomial objective functions and constraints. We focus on the worst-case analysis of this class of problems, from both geometric and computational viewpoints. Our analysis suggests that even the algebraic rigidity of polynomials does not exclude extreme pathologies induced by bilevel optimization. More specifically, we demonstrate that any semi-algebraic function can be represented as the objective of a polynomial bilevel problem. This implies that solving polynomial bilevel optimization is equivalent to optimizing general semi-algebraic functions. We obtain further sharp variants of this result by considering relevant properties of the lower-level problem, such as convexity or compactness of the feasible set. In addition, we show the $\Sigma_2^p$-hardness of polynomial bilevel optimization, characterizing polynomial bilevel problems as vastly more challenging than NP-complete problems (under reasonable hardness assumptions).
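
Schematically, the problems considered are of the form (our notation)

$$ \min_{x}\; F\big(x, y^{*}(x)\big) \quad \text{subject to} \quad y^{*}(x) \in \operatorname*{arg\,min}_{y}\; \big\{\, G(x, y) \;:\; g_i(x, y) \le 0 \,\big\}, $$

with $F$, $G$ and the $g_i$ polynomial; the pathologies above arise from composing the upper-level objective with the set-valued solution map $x \mapsto \operatorname{arg\,min}_{y} G(x, \cdot)$.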

Ashot Aleksian - "Metastable behavior of measure-dependent stochastic processes"

In this presentation, I will provide a brief introduction to the topic of metastability, which occurs when a dynamical system lingers around an equilibrium position before transitioning to another. I will explore the formulation of this problem in the case of measure-dependent processes, examine the theoretical advancements in this area, and present relevant simulations. Lastly, I will share my own contributions to the field, including a preprint written in collaboration with Julian Tugaut, as well as recent research conducted during my postdoctoral studies under the supervision of Laurent Miclo and Stéphane Villeneuve.
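
A canonical example of the measure-dependent processes in question (our notation) is the self-stabilizing diffusion

$$ dX_t = -\nabla V(X_t)\,dt - \big(\nabla F * \mu_t\big)(X_t)\,dt + \sigma\,dW_t, \qquad \mu_t = \mathrm{Law}(X_t), $$

where the process interacts with its own law through the convolution term; metastability then concerns the long exit times of $X$ from the wells of the effective potential $V + F * \mu_t$.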

Jinghua Duan - "Optimal stopping with uncertainty in drift parameters"

This paper studies an optimal stopping problem in which the underlying diffusion has uncertainty in its drift parameter: the diffusion loses its drift once it exceeds some random threshold. I show that in this case the stopping problem can be transformed into a two-dimensional Markovian stopping problem. I characterize the stopping region and prove its optimality via a verification argument. The model can be used to analyze the investment behavior of firms facing regulatory uncertainty, and thereby extends the real-options analysis of firms’ investment behavior.
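
Schematically (our notation), the problem is to compute

$$ V = \sup_{\tau}\, \mathbb{E}\big[ e^{-r\tau}\, g(X_\tau) \big], $$

where the drift of $X$ is switched off at an unobserved random threshold. Since the running maximum of $X$ summarizes the available information about whether the threshold has been crossed, augmenting the state by the running maximum is a natural candidate route to the two-dimensional Markovian reformulation mentioned above.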
 
