Michael Schuster, On the convergence of optimization problems with kernel density estimated probabilistic constraints
DOI: 10.23952/asvao.7.2025.2.05
Volume 7, Issue 2, 1 August 2025, Pages 209-221
Abstract. Uncertainty plays a significant role in applied mathematics, and probabilistic constraints are widely used to model uncertainty in various fields, even though they often pose computational challenges. Kernel density estimation (KDE) provides a data-driven approach for estimating probability density functions and efficiently evaluating the corresponding probabilities. In this paper, we investigate optimization problems with probabilistic constraints in which the probabilities are approximated by a KDE approach. We establish sufficient conditions under which the solution of the KDE-approximated optimization problem converges to the solution of the original problem as the sample size tends to infinity. The main results of this paper are three theorems: (1) for sufficiently large sample sizes, the solution of the original problem is also a solution of the approximated problem, provided the probabilistic constraint is passive; (2) the limit of a convergent sequence of solutions of the approximated problems is a solution of the original problem, provided the KDE converges uniformly; (3) we give sufficient conditions for the existence of a convergent sequence of solutions of the approximated problems.
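To make the approximation concrete, the following minimal sketch (not the paper's method; the distribution, bandwidth, and probability level p = 0.9 are illustrative assumptions) shows how a Gaussian KDE turns a probabilistic constraint of the form P(xi <= c) >= p into a deterministic, sample-based constraint: the unknown CDF is replaced by the average of the kernel CDFs over the observed samples.

```python
import math
import random

def kde_cdf(samples, c, h):
    """Gaussian-KDE estimate of P(xi <= c): the average of the
    smoothed indicator Phi((c - xi_i) / h) over all samples."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(phi((c - x) for x in [x])[0] if False else phi((c - x) / h) for x in samples) / len(samples)

# Samples from the (in practice unknown) distribution of xi;
# here a standard normal purely for illustration.
random.seed(0)
N = 10_000
samples = [random.gauss(0.0, 1.0) for _ in range(N)]
h = N ** (-1 / 5)  # bandwidth with the classical N^(-1/5) rate (sigma assumed ~1)

# The KDE-approximated probabilistic constraint P(xi <= c) >= 0.9
# becomes the deterministic constraint kde_cdf(samples, c, h) >= 0.9,
# which can be handed to any standard nonlinear solver.
estimate = kde_cdf(samples, 1.2816, h)  # true value P(xi <= 1.2816) is about 0.9
print(estimate)
```

As the sample size N grows (and h shrinks at a suitable rate), this estimate converges to the true probability, which is the mechanism behind the convergence statements in the abstract.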
How to Cite this Article:
M. Schuster, On the convergence of optimization problems with kernel density estimated probabilistic constraints, Appl. Set-Valued Anal. Optim. 7 (2025), 209-221.