Abstracts
Wolfgang Achtziger
Constrained nonsmooth optimization by penalty and directed subdifferential
Based on the Banach space of directed sets, we consider the directed subdifferential. Its definition is straightforward for DC functions and for quasidifferentiable functions, and it can be extended beyond these classes. As a benefit of directed sets, the directed subdifferential enjoys advantageous calculus rules; e.g., the sum rule holds as an equation. Furthermore, applying the so-called visualization apparatus for directed sets yields necessary and sufficient local optimality conditions for unconstrained nonsmooth optimization problems. In combination with the standard (exact) l1-penalty approach, constrained problems can be treated as well. Relations to Lagrange duality and saddle point optimality conditions are immediate. The theoretical results are illustrated by examples for which the directed subdifferential can be calculated analytically.
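As a toy illustration of the exactness phenomenon behind the l1-penalty approach mentioned above (our own example, not from the talk): for min x^2 s.t. x >= 1 the KKT multiplier is lambda* = 2, so any penalty weight c > 2 makes the unconstrained l1-penalty function share the constrained minimizer x* = 1.

```python
# Exact l1-penalty sketch: P_c(x) = x^2 + c * max(0, 1 - x).
# For c > lambda* = 2 its minimizer equals the constrained one, x* = 1.

import numpy as np

def penalty(x, c):
    return x**2 + c * np.maximum(0.0, 1.0 - x)

xs = np.linspace(-2.0, 3.0, 200001)       # fine grid over [-2, 3]
for c in (3.0, 5.0, 10.0):                # any c > 2 works
    x_min = xs[np.argmin(penalty(xs, c))]
    assert abs(x_min - 1.0) < 1e-3        # penalty minimizer = constrained one
```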
Francisco Facchinei
Ghost penalties in constrained optimization
Ghost penalization is a technique introduced a few years ago in the convergence analysis of a nonconvex, constrained optimization algorithm. Classical penalty functions are used in an unconventional way: they enter only the theoretical analysis of convergence, while the algorithm itself is penalty free. The approach makes a wealth of new results possible. After presenting the basic ideas, I will discuss some of these results, ranging from complexity bounds to applications in stochastic optimization.
Andreas Fischer
A New Problem Qualification for Lipschitzian Optimization
In contrast to a constraint qualification (CQ), a problem qualification may rely not only on the constraints of an optimization problem but also on the objective function and, like a CQ, guarantees that a local minimizer is a Karush-Kuhn-Tucker (KKT) point. With the Subset Mangasarian-Fromovitz Condition (subMFC), a new problem qualification is introduced for Lipschitzian optimization. A comparison of subMFC with several existing qualifications will be presented; it reveals, for example, that subMFC is strictly weaker than quasinormality and can even hold if calmness in the sense of Clarke is violated. The power of the new problem qualification is also demonstrated for optimistic bilevel optimization by means of the lower-level value function reformulation.
This talk is based on joint work with Isabella Käming (TU Dresden) and Alain Zemkoho (University of Southampton).
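For reference, the lower-level value function reformulation of the optimistic bilevel problem reads, in generic notation (the authors' precise setting may differ):

```latex
\min_{x,y}\ F(x,y)
\quad\text{s.t.}\quad G(x)\le 0,\quad g(x,y)\le 0,\quad f(x,y)\le\varphi(x),
\qquad\text{where}\quad \varphi(x):=\min_{y}\,\{\,f(x,y): g(x,y)\le 0\,\}.
```

The constraint f(x,y) <= phi(x) is the troublesome one: standard CQs are known to fail for this reformulation, which is why weaker problem qualifications such as subMFC are of interest here.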
Matthias Gerdts
Coordination of interacting systems using optimal control
Automation and autonomy become more and more important in many robotics applications, especially for mobile robots that move automatically on a production site, for automated cars that move in traffic, or in space robotics. These mobile robots often do not just follow a precomputed reference path; they need to be able to update their trajectories in order to react to changing environments. This typically requires a feedback control strategy that takes into account the current state of the robot and the environment. To this end we employ a model-predictive control (MPC) strategy, which requires solving (discretized) optimal control problems repeatedly. A particular focus of the talk is on methods for the coordination of interacting systems, which are not necessarily cooperative. We investigate suitable solution concepts and embed them into the MPC framework. The first approach uses generalized Nash equilibrium problems, which allow modeling the coordination of automated agents without pre-defined priorities. The second approach couples scheduling tasks with optimal control and leads to a bilevel optimization formulation. Numerical experiments and case studies will be presented to illustrate the methods.
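A minimal sketch of the MPC loop described above (generic numpy code, not the speaker's implementation; the scalar system x_{k+1} = x_k + u_k and the quadratic costs are our own choices): at each step a discretized finite-horizon optimal control problem is solved and only the first control is applied before re-solving from the new state.

```python
import numpy as np

def solve_ocp(x0, N=10):
    """min sum_k x_k^2 + u_k^2  s.t.  x_{k+1} = x_k + u_k, x_0 given.
    With x_{k+1} = x0 + (S u)_k this is unconstrained least squares in u."""
    S = np.tril(np.ones((N, N)))               # cumulative-sum matrix
    A = np.vstack([S, np.eye(N)])              # stack state and control residuals
    b = np.concatenate([-x0 * np.ones(N), np.zeros(N)])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

x = 5.0
for _ in range(20):                            # MPC loop
    u = solve_ocp(x)
    x = x + u[0]                               # apply first control only
assert abs(x) < 1e-3                           # state driven to the origin
```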
Christoph Helmberg
Conic Bundle Developments
ConicBundle is a callable library for optimizing sums of convex functions by a proximal bundle method. It has its origins in the spectral bundle approach for large scale semidefinite programming and builds on the idea of using convex function models that may be represented as support functions over suitable conic sets. Cones serve multiple purposes and we illustrate some typical use cases. In bundle methods the key step is to efficiently determine the next candidate via a quadratic bundle subproblem. The conic setting allows formulating and solving this in a uniform way by interior point methods. The recent development of a preconditioned iterative solver for the associated primal-dual KKT system opens new possibilities for efficient dynamic warm starts, so that model updates can be embedded almost directly into the interior point approach. Here, again, the uniform and simple conic structure comes in handy. In a first tentative implementation of these ideas within the callable library ConicBundle, the results on randomly generated large scale max-cut and theta-function instances indicate that the approach is competitive and frequently significantly faster than the non-dynamic approach.
Michal Kocvara
Conjectures in Real Algebra and Polynomial Optimization through High Precision Semidefinite Programming
We study degree bounds for the denominator-free Positivstellensätze in real algebra, based on sums of squares (SOS), or equivalently the convergence rate of the moment-SOS hierarchy in polynomial optimization, from a numerical point of view. As standard semidefinite programming (SDP) solvers do not provide reliable answers in many important instances, we use a new high-precision SDP solver, Loraine.jl, to support our investigation. We study small instances (low degree, small number of variables) of one-parameter families of examples and propose several conjectures for the asymptotic behavior of the degree bounds. Our objective is twofold: first, to raise awareness of the bad performance of standard SDP solvers on such examples, and second, to guide future research on the Effective Positivstellensätze.
Joint work with Lorenzo Baldi.
Russell Luke
A Linearly Convergent Primal-Dual Strategy for Rank Deficient Convex-Concave Saddle Point Problems
To achieve local linear convergence of first-order methods for structured saddle point problems involving either constraints or merit functions in the image of a linear mapping, it is assumed either implicitly or explicitly that the linear mapping is full rank. We show that a projection onto the space orthogonal to the kernel of the adjoint of the mapping allows one to maintain linear convergence of the usual first-order strategies even in the rank deficient case. To simplify the discussion we use as a template the Proximal Alternating Predictor-Corrector (PAPC) method proposed by Drori, Sabach and Teboulle.
This is joint work with Thao Nguyen and Daniel Schellhorn.
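The key object in the rank-deficient case is the projection onto the orthogonal complement of ker(A^T), i.e. onto range(A). A small numpy sketch of this construction (generic code, not the authors'), built from an SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # rank <= 3

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10 * s[0]))     # numerical rank
P = U[:, :r] @ U[:, :r].T             # projector onto range(A) = ker(A^T)^perp

b = rng.standard_normal(5)
assert np.allclose(P @ P, P)                  # P is an orthogonal projector
assert np.allclose(A.T @ (b - P @ b), 0.0)    # b - Pb lies in ker(A^T)
```

Applying P to the dual (or image-space) iterates discards the components on which the mapping carries no information, which is what restores the linear rates in the rank-deficient setting.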
Patrick Mehlitz
On convex composite optimization problems and (safeguarded) augmented Lagrangian methods
Back in 1976, Terry Rockafellar proved that the dual update rule of the augmented Lagrangian method (ALM), when applied to a constrained convex optimization problem, corresponds to the application of the proximal point algorithm to the associated dual optimization problem, and this relation can be used to infer primal-dual convergence results for the ALM. In this talk, we first extend these findings to the more general class of convex composite optimization problems. As, nowadays, in numerical optimization, it is common to use so-called safeguarded ALMs, we inspect this class of algorithms from a similar point of view and derive analogous results in the second part of the talk. More precisely, we illustrate that the dual update rule for safeguarded ALMs can be interpreted as a backward-backward splitting method and present some consequences. The analysis shows an undesirable behavior of the dual iterates of safeguarded ALMs whenever the Lagrange multipliers do not belong to the safeguarding set. In order to overcome this problem, in the third part of the talk, we introduce a new safeguarding technique, which allows for enlargements of the safeguarding set, and present associated primal-dual convergence results.
This talk is based on joint work with Alberto de Marchi (Munich, Germany) and Tim Hoheisel (Montréal, Canada).
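Rockafellar's 1976 identity can be checked by hand on a toy problem (our own example, not from the talk): for min 0.5 x^2 s.t. x = 1 the dual function is g(lambda) = -0.5 lambda^2 - lambda, and the ALM dual update coincides with a proximal point step on g.

```python
rho = 2.0

def alm_step(lam):
    # primal step: x = argmin 0.5*x^2 + lam*(x - 1) + 0.5*rho*(x - 1)^2
    x = (rho - lam) / (1.0 + rho)
    return x, lam + rho * (x - 1.0)            # ALM dual update

def prox_step(lam):
    # lam+ = argmax_mu g(mu) - (mu - lam)^2 / (2*rho), in closed form
    return (lam - rho) / (1.0 + rho)

lam = 0.0
for _ in range(50):
    x, lam_alm = alm_step(lam)
    assert abs(lam_alm - prox_step(lam)) < 1e-12   # the two updates agree
    lam = lam_alm
assert abs(lam + 1.0) < 1e-12 and abs(x - 1.0) < 1e-12  # KKT point (x*, lam*)
```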
Peter Ochs
Learning Optimization Algorithms with PAC-Bayesian Guarantees
The change of paradigm from purely model driven to data driven (learning based) approaches has tremendously altered the picture in many applications in Machine Learning, Computer Vision, Signal Processing, Inverse Problems, Statistics and so on. There is no need to mention the significant boost in performance for many specific applications, thanks to the advent of large scale Deep Learning. In this talk, we open up the area of optimization algorithms for this data driven paradigm, for which theoretical guarantees are indispensable. The expectations about an optimization algorithm clearly go beyond empirical evidence, as a whole processing pipeline may depend on a reliable output of the optimization algorithm, and the application domains of algorithms can vary significantly. While there is already a vast literature on "learning to optimize", these algorithms come with no theoretical guarantees that meet these expectations from an optimization point of view. We develop the first framework to learn optimization algorithms with provable generalization guarantees for certain classes of optimization problems, while the learning based backbone enables the algorithms to function far beyond the limitations of classical (deterministic) worst case bounds.
J.V. Outrata
On the numerical solution of a class of EPECs via the Gauss-Seidel method for computation of Nash equilibria
The talk is focused on a class of multi-leader multi-follower games, for which we use the acronym EPEC (equilibrium problem with equilibrium constraints). One of the prominent applications of this modeling framework is the deregulated electricity market, where one deals with only one follower. Recently, in [2], a variant of the implicit programming approach (ImP) to a class of MPECs has been suggested, based on the use of a bundle method along with a special semismooth derivative. It seems that this variant might be used within the framework of the Gauss-Seidel method from [1], which, under suitable assumptions, would then converge to a stationary point of the considered EPEC. In this way we intend to enrich the current, rather modest arsenal of numerical techniques capable of solving some types of EPECs. The approach will be illustrated by academic examples constructed via a modification of Nash-Cournot equilibria, where one firm takes over the role of the follower, selecting its strategy in dependence on the strategies applied by its competitors.
Joint work with H. Gfrerer, T. Roubal and J. Valdman.
[1] Ch. Kanzow, A. Schwartz: Spieltheorie, Birkhäuser, 2018.
[2] H. Gfrerer, M. Kočvara, J.V. Outrata: On the role of semismoothness in the implicit programming approach to selected nonsmooth optimization problems, arXiv:2412.05953.
Marco Sciandrone
A penalty decomposition derivative-free method for the minimization of the finite sum of black-box functions over a convex set
In this talk we focus on the problem of minimizing a smooth function, given as a finite sum of (possibly nonconvex) black-box functions, over a convex set. In order to exploit the structure of the problem advantageously, for instance when the terms of the objective function are partially separable, noisy, costly, or have only partially accessible first-order information, we propose a framework in which the penalty decomposition approach is combined with a derivative-free line-search-based method. Under standard assumptions, we state global convergence results. Finally, we show the results of computational experiments.
Oliver Stein
The improvement function in branch-and-bound methods
We present a new branch-and-bound approach for treating optimization problems with nonconvex inequality constraints. It is able to approximate the set of all global minimal points in case of solvability, and also to detect infeasibility. The new technique covers the nonconvex constraints by means of an improvement function which, although nonsmooth, can be treated by standard bounding operations.
The method is shown to be successful under a weak constraint qualification, and we also give a transparent interpretation of the output in case this condition is violated. Numerical tests illustrate the performance of the algorithm.
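In generic notation (the exact construction in the talk may differ), the improvement function at a current point \(\bar x\) for \(\min f(x)\) s.t. \(g_i(x)\le 0\), \(i=1,\dots,m\), is

```latex
p_{\bar x}(x) \;=\; \max\Bigl\{\, f(x)-f(\bar x),\ \max_{i=1,\dots,m} g_i(x) \,\Bigr\}.
```

A point with \(p_{\bar x}(x)<0\) is feasible and improves on \(\bar x\), while \(p_{\bar x}\ge 0\) on a box certifies that no such point exists there; although \(p_{\bar x}\) is nonsmooth, lower bounds on it follow directly from bounds on \(f\) and the \(g_i\), which is what makes it amenable to standard bounding operations.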
Michael Ulbrich
Semismooth Newton methods for inverse homogenization problems
We develop a semismooth Newton method for solving a class of PDE-constrained optimization problems that arise in inverse homogenization. The goal is to identify a function q that characterizes spatially varying properties of a microstructure. Via homogenization, this gives rise to an elliptic PDE constraint with an effective coefficient that depends nonlinearly on the function q∈Q. In our case, Q is a Hilbert space that embeds compactly into C([0,L]), with H1(0,L) being a canonical candidate, and q has to satisfy pointwise lower and upper bounds. We analyze the problem and, building on new Newton differentiability results for the solution operator of bilateral obstacle problems, develop a semismooth Newton method for the problem under consideration.
This is joint work with Constantin Christof and Malte Peter.
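To illustrate the semismooth Newton principle on a finite-dimensional analogue (our own toy example, not the talk's PDE setting): a discrete obstacle-type complementarity system x >= 0, Ax - b >= 0, x'(Ax - b) = 0 can be reformulated as the nonsmooth equation F(x) = min(x, Ax - b) = 0 and solved with Newton steps on an element of the generalized Jacobian.

```python
import numpy as np

def semismooth_newton(A, b, x0, tol=1e-10, maxit=50):
    """Semismooth Newton for F(x) = min(x, A x - b) = 0 (componentwise)."""
    x = x0.copy()
    for _ in range(maxit):
        Fx = np.minimum(x, A @ x - b)
        if np.linalg.norm(Fx) < tol:
            break
        # generalized Jacobian: row e_i where x_i is active, row A_i otherwise
        J = np.where((x < A @ x - b)[:, None], np.eye(len(x)), A)
        x = x - np.linalg.solve(J, Fx)
    return x

A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([1.0, -2.0, 1.0])
x = semismooth_newton(A, b, np.ones(3))
assert np.all(x >= -1e-9) and np.all(A @ x - b >= -1e-9)   # feasibility
assert abs(x @ (A @ x - b)) < 1e-9                          # complementarity
```

On this example the iteration terminates after two steps, reflecting the locally superlinear (here even finite) convergence typical of semismooth Newton methods.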
Stefan Ulbrich
Generalized derivatives for the solution operator of the obstacle problem and error estimates for numerical approximations
We derive and present error estimates for numerical approximations of a particular Clarke subgradient for reduced objective functions arising in the optimal control of the obstacle problem. The corresponding generalized derivative of the solution operator of the obstacle problem is a solution operator of a Dirichlet problem on the complement of the strictly active set. Using finite element solutions of the obstacle problem, we construct discrete and convergent approximations of this set. To show that our approximations are suitable and convergent, a detailed study of the topological structure of the strictly active set under appropriate assumptions is necessary. Based on the smaller approximation, we solve the Dirichlet problem and obtain an upper bound for the error using the larger approximation. This upper bound converges to zero. We present numerical examples to test our estimates.
Joint work with Anne-Therese Rauls-Ehlert.
Gerd Wachsmuth
A trust-region method for optimal control of ODEs with continuous-or-off controls and TV regularization
A solution algorithm for a special class of optimal control problems subject to an ordinary differential equation is proposed. The controls possess a continuous-or-off structure and are priced by a convex function. Additionally, a total variation regularization is applied to penalize switches. Our solution method combines a trust-region method and a proximal gradient method. The subproblems are solved via Bellman's optimality principle. Convergence with respect to a criticality measure is proven. As a numerical example, we solve a simple optimal control problem involving an SIR model.