
First published on Wednesday, Jul 2, 2025 and last modified on Wednesday, Jul 2, 2025 by François Chaplais.

Robust Feedback Optimization with Model Uncertainty: A Regularization Approach
arXiv
Published version: 10.48550/arXiv.2503.24151

Winnie Chan, Automatic Control Laboratory, ETH Zürich, Switzerland

Zhiyu He, Automatic Control Laboratory, ETH Zürich, Switzerland, and Max Planck Institute for Intelligent Systems, Tübingen, Germany

Keith Moffat, Automatic Control Laboratory, ETH Zürich, Switzerland

Saverio Bolognani, Automatic Control Laboratory, ETH Zürich, Switzerland

Michael Muehlebach, Max Planck Institute for Intelligent Systems, Tübingen, Germany

Florian Dörfler, Automatic Control Laboratory, ETH Zürich, Switzerland

Abstract

Feedback optimization optimizes the steady state of a dynamical system by implementing optimization iterations in closed loop with the plant. It relies on online measurements and limited model information, namely, the input-output sensitivity. In practice, various issues including inaccurate modeling, lack of observation, or changing conditions can lead to sensitivity mismatches, causing closed-loop sub-optimality or even instability. To handle such uncertainties, we pursue robust feedback optimization, where we optimize the closed-loop performance against all possible sensitivities lying in specific uncertainty sets. We provide tractable reformulations for the corresponding min-max problems via regularizations and characterize the online closed-loop performance through the tracking error in case of time-varying optimal solutions. Simulations on a distribution grid illustrate the effectiveness of our robust feedback optimization controller in addressing sensitivity mismatches in a non-stationary environment.

Acknowledgement: this work was supported by the Max Planck ETH Center for Learning Systems, the SNSF via NCCR Automation (grant agreement 51NF40 80545), and the German Research Foundation.

1 Introduction

Modern engineering systems are increasingly complex, large-scale, and variable, as seen in power grids, supply chains, and recommender systems. Achieving optimal steady-state operation of these systems is both critical and challenging. Traditional numerical optimization pipelines operate in an open-loop manner, whereby solutions are computed from an explicit formulation of the input-output map of the system and knowledge of the disturbances. However, this reliance on accurate models is restrictive and renders such pipelines unfavorable in complex environments.

Feedback optimization is an emerging paradigm for steady-state optimization of a dynamical system [1, 2, 3]. At the heart of feedback optimization is the interconnection between an optimization-based controller and a physical system. This closed-loop approach shares a similar spirit with extremum seeking [4], modifier adaptation [5], and real-time iterations [6]. Nonetheless, feedback optimization effectively handles high-dimensional objectives and coupling constraints, adapts to non-stationary conditions, and entails less computational effort (see the review in [2]).

Thanks to the iterative structure that incorporates real-time measurements and performance objectives, feedback optimization enjoys closed-loop stability [7], optimality [8, 9], constraint satisfaction [10], and online adaptation [11, 12, 13, 14, 15]. However, these salient properties rely on limited model information, i.e., the input-output sensitivity of a system. This requirement follows from using the chain rule to construct gradients in iterative updates. In practice, different issues can render the sensitivity inaccurate or elusive, e.g., corrupted data, lack of measurements, or changing conditions. As we will show in Section 2.2, such sensitivity errors can accumulate in the closed loop and cause significant sub-optimality or even divergence.

Many approaches have been developed to address inexact sensitivities in feedback optimization. A major stream leverages model-free iterations, where controllers entirely bypass sensitivities. Such model-free operations are typically enabled by derivative-free optimization, including Bayesian [16, 17, 18] and zeroth-order optimization [19, 20, 21, 22, 23]. However, controllers based on Bayesian optimization tend to be computationally expensive for high-dimensional problems, whereas zeroth-order feedback optimization brings increased sample complexity. Therefore, it is desirable to incorporate structural, albeit inexact, sensitivity information into controller iterations rather than discard it altogether.

There are two primary solutions to handle model uncertainty without resorting to model-free iterations: adaptation and robustness. In the context of feedback optimization, adaptive schemes leverage offline or online data to refine knowledge of sensitivities, thereby facilitating closed-loop convergence. Examples include learning the sensitivity via least squares [24, 25] or stochastic approximation [26], as well as constructing behavioral representations of the sensitivity from input-output data [27]. However, adaptive strategies impose additional requirements for data, computation, and estimation. Restrictions arise in scenarios involving high-dimensional systems and limited computational power, where sensitivity estimation can be challenging.

In this paper, we consider robust feedback optimization, where the closed-loop performance is optimized given the worst-case realization of the sensitivity in some uncertainty sets. This is formalized as a min-max problem for which tractable reformulations via regularization are further provided. Our robust feedback optimization controllers feature provable convergence guarantees for time-varying problems with changing disturbances and references. Compared to the above adaptive schemes, our controllers only leverage an inexact sensitivity and hence are easy to implement. In contrast to related robust strategies in learning[28, 29] and data-driven control[30], we tackle a more demanding setting wherein model uncertainty is intertwined with both system dynamics and controller iterations. Our main contributions are as follows.

  • We formulate robust feedback optimization by addressing structured uncertainties in sensitivities. We provide tractable reformulations via regularization and build connections with lasso and ridge regression.
  • We present online robust feedback optimization controllers that address two types of sensitivity uncertainty sets. We establish closed-loop convergence by characterizing errors in tracking trajectories of time-varying optimal solutions.
  • Through a numerical experiment of voltage regulation in a distribution grid, we demonstrate that the proposed controllers preserve voltage stability while prescribing less curtailment and reactive power control, even with inaccurate sensitivities.

The rest of this paper is organized as follows. Section 2 motivates and presents the problem setup. Section 3 provides tractable reformulations and our robust feedback optimization controllers. The closed-loop performance guarantee is established in Section 4, followed by numerical evaluations on a distribution grid in Section 5. Finally, Section 6 concludes the article and discusses future directions.

2 Background and Problem Formulation

2.1 Preliminaries

We consider the following dynamical system

\[ \begin{equation} \begin{split} x_{k+1} &= Ax_k + Bu_k + d_{x, k}, \\ y_k &= Cx_k + d_{y, k}, \end{split} \end{equation} \]

(1)

where \( x_k \in \mathbb{R}^n\) , \( u_k \in \mathbb{R}^m\) , \( y_k \in \mathbb{R}^p\) , \( d_{x,k} \in \mathbb{R}^n\) , and \( d_{y,k} \in \mathbb{R}^p\) denote the state, input, output, exogenous disturbance, and measurement noise at time \( k\) , respectively. Further, \( A \in \mathbb{R}^{n \times n}\) , \( B \in \mathbb{R}^{n \times m}\) , and \( C \in \mathbb{R}^{p \times n}\) are system matrices. We focus on a stable system, i.e., the spectral radius \( \rho(A)\) of \( A\) in (1) is less than \( 1\) ; in practice, this condition can also be ensured by prestabilizing the system. Given fixed inputs and disturbances (i.e., \( u_k = u, d_{x,k} = d_x, d_{y,k} = d_y, \forall k \in \mathbb{N}\) ), system (1) admits a unique steady-state output

\[ \begin{equation} \begin{split} y_{\textup{ss}}(u,d) &= Hu + d, \\ H &\triangleq C(I-\!A)^{-1}B, ~ d \triangleq C(I-\!A)^{-1}d_x + d_y. \end{split} \end{equation} \]

(2)

In (2), \( H \in \mathbb{R}^{p \times m}\) is the sensitivity matrix of system (1).
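The map (2) can be checked numerically. The sketch below uses small hypothetical system matrices (all values are illustrative, not from the paper): it forms \( H = C(I-A)^{-1}B\) and the aggregated disturbance \( d\) , then verifies that simulating (1) under a constant input settles at \( y_{\textup{ss}}(u,d) = Hu + d\) .

```python
import numpy as np

# Hypothetical stable system (rho(A) < 1); the matrices are illustrative.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 2.0]])
d_x = np.array([0.2, -0.1])
d_y = np.array([0.05])

# Sensitivity and aggregated disturbance from (2).
I = np.eye(2)
H = C @ np.linalg.solve(I - A, B)            # H = C (I - A)^{-1} B
d = C @ np.linalg.solve(I - A, d_x) + d_y    # d = C (I - A)^{-1} d_x + d_y

# Simulating (1) under a constant input converges to y_ss(u, d) = H u + d.
u = np.array([1.5])
x = np.zeros(2)
for _ in range(200):
    x = A @ x + B @ u + d_x
y = C @ x + d_y
print(np.allclose(y, H @ u + d))  # True: output settles at the steady state
```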

A performance objective characterizing the input-output performance of system (1) at each time \( k \in \mathbb{N}\) is

\[ \begin{equation} \begin{split} \Phi_k(u; d_k, r_k) &= \|u\|^2_R + \lambda \|y_{\textup{ss}}(u, d_k) - r_k\|^2_Q \\ &= \|u\|^2_R + \lambda \|H u + d_k - r_k\|^2_Q, \end{split} \end{equation} \]

(3)

where \( R \in \mathbb{R}^{m \times m}\) and \( Q \in \mathbb{R}^{p \times p}\) are positive semidefinite matrices, \( \|u\|_R = \sqrt{u^\top R u}\) and \( \|y\|_Q = \sqrt{y^\top Q y}\) denote weighted norms, \( \lambda \geq 0\) is a weight parameter, and \( r_k \in \mathbb{R}^p\) is the reference at time \( k\) . Further, \( y_{\textup{ss}}(u, d_k) = Hu + d_k\) is the steady-state output associated with the input \( u\) and the disturbance \( d_k \triangleq C(I-\!A)^{-1}d_{x,k} + d_{y,k}\) at time \( k\) . The function (3) penalizes the input cost and the difference between the steady-state output and the reference.

To optimize (3), numerical solvers require explicit knowledge of the map \( y_{\textup{ss}}\) as per (2), including an accurate value of the disturbance \( d\) , which can be restrictive in applications. In contrast, feedback optimization leverages real-time output measurements and limited model information, namely, the sensitivity matrix \( H\) , to steer system (1) to optimal operating conditions [1, 2].

2.2 Example: Detrimental Effects of Inexact Sensitivities

Many practical issues including lack of measurements, corrupted data, and changing conditions cause model uncertainty, i.e., sensitivity errors [2]. We present a motivating example to show how such errors invalidate feedback optimization by inducing closed-loop sub-optimality or instability. While this example is synthetic, we observe similar phenomena in realistic power grid simulations (see Section 5).

We consider a system abstracted by the steady-state map (2) with fixed disturbances. We generate inexact sensitivities \( \hat{H}\) in the following two ways.

  • We fix the size of \( H\) (i.e., \( H \in \mathbb{R}^{3 \times 3}\) ) and add constant perturbations of different intensities. Specifically, \( \hat{H} = H + \sigma \Delta_H\) , where \( \sigma \geq 0\) , and the elements of \( \Delta_H\) follow uniform distributions.
  • We vary the dimension of a square sensitivity \( H\) from \( 1\) to \( 7\) and add perturbation noise with a fixed norm, i.e., \( \hat{H} = H + \Delta_H\) . The squares of the elements of \( \Delta_H\) follow a Dirichlet distribution, ensuring \( \lVert \Delta_H \rVert_F = 1\) .
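The second perturbation scheme above can be sketched as follows; the dimensions and random seed are illustrative. Drawing the element squares jointly from a flat Dirichlet distribution makes them sum to one, which fixes the Frobenius norm of \( \Delta_H\) at one.

```python
import numpy as np

def unit_frobenius_perturbation(p, m, rng):
    """Draw Delta_H with ||Delta_H||_F = 1: the element squares follow a
    flat Dirichlet distribution (so they sum to 1), with random signs."""
    squares = rng.dirichlet(np.ones(p * m))
    signs = rng.choice([-1.0, 1.0], size=p * m)
    return (signs * np.sqrt(squares)).reshape(p, m)

rng = np.random.default_rng(0)
Delta_H = unit_frobenius_perturbation(3, 3, rng)
print(np.isclose(np.linalg.norm(Delta_H, 'fro'), 1.0))  # True by construction
```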

To optimize (3), consider the following feedback optimization controller with an inexact \( \hat{H}\)

\[ \begin{equation} u_{k+1} = u_k - 2\eta \left(R u_k + \lambda \hat{H}^{\top} Q (y_k - r_k) \right), \end{equation} \]

(4)

where \( \eta > 0\) is the step size. The update (4) follows a gradient descent iteration given the objective (3), using the real-time output measurement \( y_k\) of (1) to replace the steady-state output \( H u_k + d_k\) . Figure 1 illustrates the closed-loop performance when the controller (4) is applied to the system (1). We observe from Figure 2 that larger errors in sensitivities cause increased sub-optimality. Furthermore, Figure 3 demonstrates that when \( \eta\) is fixed, such a negative effect becomes more pronounced when the problem dimension grows.

Figure 2. Inexact sensitivities with varying perturbations.
Figure 3. Inexact sensitivities of different sizes.
Figure 1. Closed-loop performance when the controller (4) with inexact sensitivities is interconnected with the system (2).
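The motivating experiment can be sketched as follows. The matrices, step size, and perturbation levels below are illustrative choices, not the values behind Figures 1-3; the loop implements the update (4) against the static abstraction (2) and compares the attained cost with the optimum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Static abstraction (2) of the plant: y = H u + d; the model H_hat is inexact.
H = rng.standard_normal((3, 3))
d = rng.standard_normal(3)
r = np.zeros(3)
R, Q, lam = np.eye(3), np.eye(3), 1.0

def run_controller(H_hat, eta=0.02, steps=2000):
    """Run the feedback-optimization update (4) with model H_hat."""
    u = np.zeros(3)
    for _ in range(steps):
        y = H @ u + d                # measurement from the true plant
        u = u - 2 * eta * (R @ u + lam * H_hat.T @ Q @ (y - r))
    return u

def cost(u):
    e = H @ u + d - r
    return u @ R @ u + lam * e @ Q @ e

u_exact = run_controller(H)          # exact sensitivity: reaches the optimum
for sigma in (0.1, 0.5):
    H_hat = H + sigma * rng.uniform(-1, 1, size=(3, 3))
    print(f"sigma={sigma}: cost {cost(run_controller(H_hat)):.4f}"
          f" vs optimal {cost(u_exact):.4f}")
```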

2.3 Problem Formulation

Motivated by the above observations, we pursue robust feedback optimization, where we optimize a worst-case performance objective over all realizations of the sensitivity within an uncertainty set. In practice, we can obtain through prior knowledge or identification [15, 2] an inexact sensitivity \( \hat{H}\) , which differs from the true sensitivity \( H\) of (1) by \( \Delta_H\) , i.e., \( \hat{H} + \Delta_H = H\) . In view of \( \hat{H}\) and the uncertainty \( \Delta_H\) , our robust formulation is

\[ \begin{equation} \min_{u \in \mathbb{R}^m} \max_{\Delta_H \in \mathcal{D}} ~ \lVert u \rVert ^2_R + \lambda \lVert (\hat{H} + \Delta_H) u + d_k - r_k \rVert ^2_Q, \end{equation} \]

(5)

where \( \mathcal{D} \subset \mathbb{R}^{p \times m}\) is the uncertainty set wherein \( \Delta_H\) lies, \( d_k = C(I-\!A)^{-1}d_{x,k} + d_{y,k}\) aggregates the disturbances \( d_{x,k}\) and \( d_{y,k}\) , and \( r_k\) is the reference at time \( k\) . Different from (3), in (5) we robustify the steady-state specification of system (1) against the sensitivity uncertainty \( \Delta_H\) . Essentially, (5) implies minimizing the steady-state input-output performance for the worst-case realization of sensitivity. We examine the following types of uncertainty sets.

  • Generalized uncertainties described by

    \[ \begin{equation} \mathcal{D}_{\text{gen}} \triangleq \left\{\Delta_H \big| \lVert \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H \rVert _F \leq \varrho_{\text{gen}} \right\}, \end{equation} \]

    (6)

    where \( \varrho_{\text{gen}} \geq 0\) , and \( \|\cdot\|_F\) denotes the Frobenius norm.

  • Uncorrelated column-wise uncertainties of the form

    \[ \begin{align} \mathcal{D}_{\text{col}} \triangleq \big\{\Delta_H \big| & \lVert (\lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H)_i \rVert \leq (\varrho_{\text{col}})_i, \notag \\ &\forall i \in \{1, \cdots, m \} \big\}, \end{align} \]

    (7)

    where \( (\lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H)_i\) denotes the \( i\) -th column of the matrix \( \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H\) and \( (\varrho_{\text{col}})_i\) denotes the \( i\) -th element of the vector \( \varrho_{\text{col}} \in \mathbb{R}^m\) , with \( (\varrho_{\text{col}})_i \geq 0\) .

In the above sets, \( \mathcal{D}_{\text{gen}}\) poses a bounded-norm restriction on the uncertainty \( \Delta_H\) . In contrast, \( \mathcal{D}_{\text{col}}\) bounds the norm of each column of \( \Delta_H\) , which is useful when different levels of confidence exist regarding how each component of \( u\) affects the output \( y\) . Both types are common in the robust optimization literature [29, 28, 30].

While problem (5) is unconstrained, we will discuss strategies to handle input and output constraints at the end of Section 3.2. We consider quadratic objectives in (5) to highlight intuition and facilitate the presentation of robust strategies. Promising extensions to handle general objectives can be built on modern advances in robust optimization [31].

3 Robust Feedback Optimization

3.1 Tractable Reformulations

We provide tractable reformulations of problem (5), thereby facilitating the design of robust feedback optimization controllers. Let \( u_k^* \in \mathbb{R}^m\) be the optimal point of problem (5) at time \( k\) . We analyze two cases involving the uncertainty sets discussed in Section 2.3.

\( \bullet\) The case with generalized uncertainties

For problem (5) with the uncertainty set \( \mathcal{D} = \mathcal{D}_{\text{gen}}\) (see (6)), the reformulated problem is

\[ \begin{equation} \min_{u \in \mathbb{R}^m} ~ \Phi_{k,\ell_2}(u) \triangleq \lVert u \rVert ^2_R + \lambda \lVert \hat{H} u + d_k - r_k \rVert ^2_Q + \rho_{\textup{gen}} \lVert u \rVert ^2, \end{equation} \]

(8)

where the regularizer satisfies

\[ \begin{equation} \rho_{\text{gen}} = \left\{ \begin{array}{ll} \varrho_{\text{gen}} \lVert M u_k^* + d_k - r_k \rVert / \|u_k^*\|, & \text{if } M u_k^* + d_k - r_k \neq 0, \\ \varrho_{\text{gen}} / \|u_k^*\|, & \text{otherwise}. \end{array}\right. \end{equation} \]

(9)

\( \bullet\) The case with uncorrelated column-wise uncertainties

The reformulated problem associated with (5) involving \( \mathcal{D} = \mathcal{D}_{\text{col}}\) (see (7)) is

\[ \begin{equation} \min_{u \in \mathbb{R}^m} ~ \Phi_{k,\ell_1}(u) := \lVert u \rVert ^2_R + \lambda \lVert \hat{H} u + d_k - r_k \rVert ^2_Q + \rho_{\textup{col}}^{\top} \lvert u \rvert, \end{equation} \]

(10)

where \( |u| \in \mathbb{R}^m\) denotes the component-wise absolute value of \( u\) , and the regularizer \( \rho_{\textup{col}} \in \mathbb{R}^m\) satisfies

\[ \begin{equation} \rho_{\textup{col}} = \left\{ \begin{array}{ll} 2 \lVert M u_k^* + d_k - r_k \rVert \varrho_\textup{col}, & \text{if } M u_k^* + d_k - r_k \neq 0, \\ \varrho_{\textup{col}}, & \text{otherwise}. \end{array}\right. \end{equation} \]

(11)

The following theorem establishes that the above reformulated problems share the same optimal points as problem (5).

Theorem 1

Problem (5) with the uncertainty set \( \mathcal{D} = \mathcal{D}_{\textup{gen}}\) in (6) and problem (8) share the same optimal point. Moreover, problem (5) with \( \mathcal{D} = \mathcal{D}_{\textup{col}}\) in (7) and problem (10) attain the same optimal point.

Proof

The objective of (5) can be compactly written as

\[ \begin{equation} \lVert (M + \Delta_{M}) u + d_k - r_k \rVert ^2, \end{equation} \]

(12)

where \( M = \big[R^{\frac{1}{2}}; \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \hat{H}\big] \in \mathbb{R}^{(m+p) \times m}\) , and \( \Delta_M = \big[0; \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H\big] \in \mathbb{R}^{(m+p) \times m}\) ; with a slight abuse of notation, \( d_k - r_k\) here denotes its zero-padded, weighted counterpart \( \big[0; \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} (d_k - r_k)\big]\) in the stacked space. Since the norm is non-negative, optimizing (12) is equivalent to optimizing \( \lVert (M + \Delta_{M}) u + d_k - r_k \rVert\) . When \( \mathcal{D} = \mathcal{D}_{\text{gen}}\) , we perform an analysis similar to that in [32, Theorem 3.1] and obtain

\[ \max_{\lVert \Delta_{M} \rVert_F \leq \varrho_{\text{gen}}} \lVert (M + \Delta_{M}) u + d_k - r_k \rVert = \|M u + d_k - r_k\| + \varrho_{\text{gen}} \|u\|. \]

Moreover, the following two problems

\[ \begin{align*} \min_{u \in \mathbb{R}^m} ~ &\lVert M u + d_k - r_k \rVert + \varrho_{\text{gen}} \|u\|, \\ \min_{u \in \mathbb{R}^m} ~ &\lVert M u + d_k - r_k \rVert ^2 + \rho_{\text{gen}} \lVert u \rVert ^2, \end{align*} \]

share the same optimal point \( u_k^*\) if the condition (9) holds. This result can be proved by comparing the optimality conditions of both problems and noting that \( 0\) is a subgradient of \( \|u\|\) at \( u=0\) .

We proceed to analyze the case when \( \mathcal{D} = \mathcal{D}_{\text{col}}\) . Analogous to [28, Theorem 1], we obtain

\[ \max_{\substack{\lVert (\Delta_{M})_i \rVert \leq (\varrho_{\textup{col}})_i \\ \forall i \in \{1, \dots, m\}}} \lVert (M + \Delta_{M}) u + d_k - r_k \rVert = \lVert M u + d_k - r_k \rVert + \varrho_{\textup{col}}^{\top} |u|. \]

Furthermore, the optimal points of the following problems

\[ \min_{u \in \mathbb{R}^m} ~ \lVert M u + d_k - r_k \rVert + \varrho_{\textup{col}}^{\top} |u| ~ \text{and} ~ \min_{u \in \mathbb{R}^m} ~ \lVert M u + d_k - r_k \rVert ^2 + \rho_{\textup{col}}^{\top} |u| \]

coincide when the regularizer satisfies the condition (11).

The \( \ell_2\) -regularizer in (8) and the \( \ell_1\) -regularizer in (10) admit the same interpretation as those in classical ridge and lasso regression [29]. In essence, these regularization terms penalize the magnitude of the input, helping to achieve closed-loop stability in the face of model uncertainty. The use of the \( \ell_1\) -regularizer also promotes sparsity of the control inputs.
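The worst-case identity at the core of the proof of Theorem 1 admits a quick numerical check. The sketch below (illustrative dimensions and seed) constructs the rank-one perturbation that attains the maximum and samples random feasible perturbations to confirm that none exceeds the bound.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3))
eps = rng.standard_normal(5)       # plays the role of d_k - r_k
u = rng.standard_normal(3)
rho = 0.7                          # the bound varrho_gen

res = M @ u + eps
# The maximizer aligns Delta_M u with the residual direction (rank one,
# Frobenius norm exactly rho).
Delta_star = rho * np.outer(res / np.linalg.norm(res), u / np.linalg.norm(u))

lhs = np.linalg.norm((M + Delta_star) @ u + eps)
rhs = np.linalg.norm(res) + rho * np.linalg.norm(u)
print(np.isclose(lhs, rhs))        # the upper bound is attained

# Random feasible perturbations never exceed the bound.
for _ in range(1000):
    D = rng.standard_normal((5, 3))
    D *= rho / np.linalg.norm(D, 'fro')
    assert np.linalg.norm((M + D) @ u + eps) <= rhs + 1e-9
```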

Remark 1

While the expressions of \( \rho_\textup{gen}\) and \( \rho_{\textup{col}}\) involve \( \|u_k^*\|\) , this dependence arises from the quadratic objective in (5) and the related step of equivalent reformulation. In practice, for a variation of (5) with non-squared \( \ell_2\) -norms, the regularizers in the reformulated problems will only entail the uncertainty bounds \( \varrho_{\text{gen}}\) and \( \varrho_{\text{col}}\) but not \( \|u_k^*\|\) .

3.2 Design of Robust Feedback Optimization Controllers

Based on the reformulations in Section 3.1, we present our online robust feedback optimization controllers. These controllers leverage an inexact sensitivity \( \hat{H}\) and real-time output measurements of system (1). They employ optimization-based iterations, thereby driving the system to operating points characterized by (8) or (10).

For problem (8) corresponding to the case with generalized uncertainties, our robust feedback optimization controller employs the following gradient-based update

\[ \begin{equation} u_{k+1} = u_k - 2\eta \left(R u_k + \lambda \hat{H}^{\top} Q (y_k - r_k) + \rho_{\text{gen}} u_k\right), \end{equation} \]

(13)

where \( \eta > 0\) is the step size. The update direction of the controller (13) is related to the negative gradient of (8) at \( u_k\) . Further, (13) uses the output measurement \( y_k\) of the true system (1) as feedback.
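A minimal sketch of one step of (13) is given below; the matrices, regularizer, and step size are hypothetical values for illustration. With \( \rho_{\textup{gen}} > 0\) , the extra term shrinks the input relative to the nominal update (4), which is the stabilizing effect of the regularization.

```python
import numpy as np

def robust_fo_step_l2(u, y, r, R, Q, lam, H_hat, rho_gen, eta):
    """One iteration of the l2-regularized robust controller (13)."""
    grad = R @ u + lam * H_hat.T @ Q @ (y - r) + rho_gen * u
    return u - 2 * eta * grad

# Illustrative one-step comparison with the nominal update (rho_gen = 0).
u = np.array([1.0, -2.0])
y, r = np.array([0.5]), np.array([0.0])
R, Q, lam = np.eye(2), np.eye(1), 1.0
H_hat = np.array([[0.3, -0.1]])
u_nom = robust_fo_step_l2(u, y, r, R, Q, lam, H_hat, 0.0, 0.1)
u_rob = robust_fo_step_l2(u, y, r, R, Q, lam, H_hat, 0.5, 0.1)
print(np.linalg.norm(u_rob) < np.linalg.norm(u_nom))  # regularized step shrinks u
```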

Problem (10), associated with the case with uncorrelated column-wise uncertainties, involves the nonsmooth regularizer \( \rho_{\textup{col}}^{\top} \lvert u \rvert\) . Hence, building on proximal gradient descent, the proposed controller updates as follows:

\[ \begin{equation} u_{k+1} = \operatorname{prox}_{\eta \rho_{\textup{col}}} \big(u_k - 2\eta (R u_k + \lambda \hat{H}^{\top} Q (y_k - r_k)) \big), \end{equation} \]

(14)

where \( \eta > 0\) is the step size. In (14), \( \operatorname{prox}_{\eta \rho_{\textup{col}}}(u) \triangleq \arg\min_{u' \in \mathbb{R}^m} \eta \rho_{\textup{col}}^\top |u'| + \frac{1}{2}\|u' - u\|^2\) denotes the proximal operator of \( \eta \rho_{\textup{col}}^\top |u|\) , i.e., component-wise soft thresholding \( \operatorname{sgn}(u_i) \max\{|u_i| - \eta (\rho_{\textup{col}})_i, 0\}\) , where \( \operatorname{sgn}(\cdot)\) is the sign function. Similar to (13), this controller uses the real-time output \( y_k\) of system (1) and iteratively calculates new inputs.
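The update (14) thus reduces to a gradient step followed by component-wise soft thresholding; a minimal sketch (function names and values are illustrative):

```python
import numpy as np

def soft_threshold(u, tau):
    """prox of tau^T |.|: component-wise sgn(u_i) * max(|u_i| - tau_i, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def robust_fo_step_l1(u, y, r, R, Q, lam, H_hat, rho_col, eta):
    """One iteration of the proximal-gradient controller (14)."""
    grad = R @ u + lam * H_hat.T @ Q @ (y - r)
    return soft_threshold(u - 2 * eta * grad, eta * rho_col)

out = soft_threshold(np.array([0.3, -1.2, 0.05]), np.array([0.1, 0.1, 0.1]))
print(out)  # [0.2, -1.1, 0.0]: components below the threshold become zero
```

The thresholding sets small components exactly to zero, which is the mechanism behind the sparsity-promoting behavior of the \( \ell_1\) -regularizer.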

We further discuss several extensions of the proposed controllers (13) and (14). In practice, restrictions on the input due to actuation limits or economic conditions can often be represented as a constraint set \( \mathcal{U} \subset \mathbb{R}^m\) . In this regard, we can project the iterates \( u_k\) generated by (13) and (14) back onto \( \mathcal{U}\) , thereby ensuring constraint satisfaction at every time step. Should output constraints be imposed, e.g., by safety requirements, we can augment the objectives in (8) and (10) with suitable penalty (e.g., quadratic or log-barrier) functions and incorporate the resulting derivative terms into the updates (13) and (14); see also [2, Section 2.4].
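For the common case of box-shaped input constraints, the projection mentioned above is simple clipping; a sketch with illustrative bounds:

```python
import numpy as np

def project_box(u, u_min, u_max):
    """Euclidean projection onto the box {u : u_min <= u <= u_max}."""
    return np.clip(u, u_min, u_max)

# A projected variant of (13) or (14) would apply this after each update,
# e.g. with hypothetical actuation limits:
u_next = project_box(np.array([1.8, -0.4]),
                     np.array([0.0, -1.0]),
                     np.array([1.0, 1.0]))
print(u_next)  # [1.0, -0.4]: the first component is clipped to its limit
```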

4 Performance Guarantee

We present the performance guarantee of the closed-loop interconnection between system (1) and our robust feedback optimization controller. A major challenge is that sensitivity uncertainty is interlaced with system dynamics and controller iterations, complicating convergence analysis. To address this challenge, we analyze the coupled evolution of the system (1) and the proposed controller, while characterizing the cumulative effects of sensitivity uncertainty.

Recall that \( u_k^*\) is the optimal point of problem (5) at time \( k\) , and that \( d_k \triangleq C(I-\!A)^{-1}d_{x,k} + d_{y,k}\) aggregates the disturbances. We consider the stable system (1), i.e., \( \rho(A) < 1\) . Let \( x_{\textup{ss},k} \in \mathbb{R}^n\) be the steady state of (1) induced by \( u_{k}\) and \( d_{x,k}\) . In other words, \( x_{\textup{ss},k} = Ax_{\textup{ss},k} + Bu_{k} + d_{x,k}\) , implying \( x_{\textup{ss},k} = (I-A)^{-1}(Bu_{k} + d_{x,k})\) . For any given positive definite \( \bar{Q} \in \mathbb{R}^{n \times n}\) , there exists a unique positive definite \( P \in \mathbb{R}^{n \times n}\) satisfying the Lyapunov equation \( A^\top P A- P + \bar{Q} = 0\) . Let \( \|x\|_P \triangleq \sqrt{x^\top P x}\) be the weighted norm and \( \lambda_{\max}(P)\) be the maximum eigenvalue of \( P\) . Our performance guarantee is as follows.
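The discrete Lyapunov equation above can be solved by vectorization; a minimal NumPy sketch with an illustrative stable \( A\) (using the identity \( \operatorname{vec}(A^\top P A) = (A^\top \otimes A^\top)\operatorname{vec}(P)\) ):

```python
import numpy as np

# Solve A^T P A - P + Q_bar = 0 via (I - A^T kron A^T) vec(P) = vec(Q_bar).
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])       # Schur stable: rho(A) < 1
Q_bar = np.eye(2)

n = A.shape[0]
K = np.kron(A.T, A.T)
P = np.linalg.solve(np.eye(n * n) - K, Q_bar.flatten()).reshape(n, n)

print(np.allclose(A.T @ P @ A - P + Q_bar, 0))  # the Lyapunov equation holds
print(np.all(np.linalg.eigvalsh(P) > 0))        # P is positive definite
```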

Theorem 2

Let system (1) be stable. There exists \( \eta^* > 0\) such that for any \( \eta \in (0, \eta^*]\) , the closed-loop interconnection between (1) and the controller (13) or (14) guarantees

\[ \begin{equation} \left \lVert \begin{bmatrix} \|u_{k} - u_{k}^*\| \\ \lVert x_{k} - x_{\textup{ss},k} \rVert_P \end{bmatrix} \right \rVert \leq r_1 (c_M)^k \left \lVert \begin{bmatrix} \|u_0 - u_0^*\| \\ \lVert x_0 - x_{\textup{ss},0} \rVert_P \end{bmatrix} \right \rVert + r_2 \frac{c_M}{1 - c_M} \left \lVert \sup_{i \in [k]} q_i \right \rVert, \end{equation} \]

(15)

where \( r_1, r_2 > 0\) , and \( c_M \in [0, 1)\) . Moreover,

\[ q_k \triangleq \begin{bmatrix} \eta \bar{c}_1 \|H-\hat{H}\| \|u_{k}\| + \lVert u_{k+1}^* - u_k^* \rVert \\ \eta c_3 \|H - \hat{H}\|\|u_k\| + c_4 \|d_{k+1} - d_{k}\| + \eta c_5 \end{bmatrix}, \]

where the constants are \( \bar{c}_1 = 2\lambda \|\hat{H}^\top Q\|\) , \( c_3 = 2 \lambda \|(I-\!A)^{-1}B\| \lambda_{\max}(P) \|\hat{H}^\top Q\|\) , \( c_4 = \lambda_{\max}(P)\|(I-\!A)^{-1}\|\) , and

\[ c_5 = \left\{ \begin{array}{ll} 0, & \textup{for } (13), \\ 2 \lambda_{\max}(P) \|(I-A)^{-1}B\| \|\rho_{\textup{col}}\|, & \textup{for } (14). \end{array}\right. \]

Proof

The proof is provided in Section 6.2.

In Theorem 2, we characterize the closed-loop performance through the joint evolution of the distance \( \|u_{k} - u_{k}^*\|\) to the optimal point \( u_k^*\) and the distance \( \|x_{k} - x_{\textup{ss},k}\|\) to the steady state \( x_{\textup{ss},k}\) . The upper bound (15) is in the spirit of input-to-state stability [33] and similar to those in [12, 10, 14, 13]. In contrast to these works, we additionally characterize in (15) the cumulative effects of the given sensitivity uncertainty (i.e., \( \|H - \hat{H}\|\) ) and of the regularizer corresponding to the uncertainty set (i.e., \( \rho_{\textup{col}}\) ). The effect of the initial conditions \( u_0\) and \( x_0\) vanishes exponentially fast, since \( c_M \in [0,1)\) . The asymptotic error is proportional to the shifts of the optimal solutions \( u_k^*\) and the disturbances \( d_k\) , as well as to the sensitivity uncertainty \( \|H - \hat{H}\|\) . The influence of this uncertainty can be tuned via the step size; see the terms in \( q_k\) . It is also possible to establish upper bounds on the distance of the output to the optimal steady-state output through the Lipschitz property of the dynamics (1).

5 Numerical Experiments

We present a case study in a distribution grid to showcase the effectiveness of our robust feedback optimization controllers. Specifically, we consider real-time voltage regulation while minimizing active power curtailment and reactive power actuation. Our goal is to show that robustification is effective beyond an academic setting for theoretical guarantees and can address practical challenges such as nonlinear steady states and state-dependent sensitivities.

Figure 5. Standard feedback optimization.
Robust feedback optimization (13).
Robust feedback optimization (14).
Figure 4. Real-time voltage control for a distribution grid after the topology change.

Consider a distribution grid with \( n \in \mathbb{N}\) photovoltaic inverters. Let \( p_{i,k}\) , \( q_{i,k}\) , \( p^{\textup{MPP}}_{i, k}\) , and \( v_{i,k}\) denote the active power, reactive power, maximum power point, and voltage of inverter \( i\) at time \( k\) , respectively, where \( i=1,…,n\) . Let \( u_{i,k} \triangleq [p_{i,k} - p^{\text{MPP}}_{i, k}, q_{i,k}]\) be the variable of inverter \( i\) . Let \( u_k = [u_{1,k}, …, u_{n,k}]\) and \( v_k = [v_{1,k}, …, v_{n,k}]\) be concatenated variables. Further, \( d_k\) represents the load at time \( k\) . The nonlinear map from \( u_k\) and \( d_k\) to \( v_k\) is given by the power flow solver [34]. We aim to regulate grid voltage and minimize renewable energy curtailments and reactive power actuation. This is formalized by the following problem

\[ \begin{equation} \begin{split} \min_{u_k} ~ & \lVert u_k \rVert ^2_R + \lambda \lVert v_k - r_k \rVert ^2_Q \\ \textrm{s.t.} ~ & u_{i,k} \in \mathcal{U}_{i,k} ~ \forall i = 1, \cdots, n, \end{split} \end{equation} \]

(17)

where \( R \in \mathbb{R}^{2n \times 2n}\) and \( Q \in \mathbb{R}^{n \times n}\) are positive definite cost matrices, \( \mathcal{U}_{i,k} \triangleq \{ [p_i, \, q_i] : 0 \leq p_i \leq p^{\text{MPP}}_{i, k}, q_{\min} \leq q_i \leq q_{\max} \}\) is the constraint set, and \( q_{\min}\) and \( q_{\max}\) are lower and upper bounds on reactive actuation, respectively.

We adopt the UNICORN 56-bus test case [35] with 5 photovoltaic inverters. Although the input-output sensitivity is a nonlinear function of \( u_k\) , we learn a constant approximation \( \hat{H}\) based on power flow linearization and historical data of the injected powers and voltages. This sensitivity becomes even more inexact when the distribution grid experiences a topology change, specifically when the point of common coupling is switched from bus 1 to bus 26. While the uncertainty set for \( \hat{H}\) is hard to characterize exactly, we tune the regularizers of (13) and (14) by gradually decreasing their values from conservative upper bounds. We augment the standard feedback optimization controller (4) and the proposed controllers (13) and (14) with projection onto \( \mathcal{U}_{i,k}\) , use the same step size for these controllers, and apply them to the changed grid.

As shown in the first sub-figure of Figure 5, when implemented in a new environment with sensitivity uncertainty, standard feedback optimization causes oscillations and voltage violations. Note that the dashed lines in the sub-figures on the first row denote the maximum and minimum voltage limits, which equal \( 1.1\) p.u. and \( 0.9\) p.u., respectively. This standard controller also requires large reactive power actuation. In contrast, the robust feedback optimization controllers maintain voltage stability after the point of common coupling changes. This is achieved by conservatively regulating the control inputs, a consequence of regularization in the face of uncertainty. Furthermore, as shown in the sub-figures on the last row, the proposed controllers lead to less active power curtailment compared to the standard approach. Overall, robust feedback optimization effectively handles model uncertainty in this example of real-time voltage regulation.

6 Conclusion

We addressed steady-state optimization of a dynamical system subject to model uncertainty by presenting robust feedback optimization, which seeks optimal closed-loop performance given all possible sensitivities falling within bounded uncertainty sets. Tractable reformulations for this min-max problem via regularized steady-state specifications were then provided. We also showcased the adaptation and robustness of our controllers through theoretical tracking guarantees and numerical experiments on a distribution grid. Future avenues include tuning regularizers via differentiable programming, incorporating regularization into online sensitivity learning, and pursuing robustness against model uncertainty for nonlinear stable systems.

Appendix

6.1 Supporting Lemmas

We provide two lemmas that characterize the state dynamics (1) and the controller iterations (13) and (14).

Lemma 1

Let the conditions of Theorem 2 hold. The state dynamics (1) ensure

\[ \begin{align} \lVert x_{k+1} - x_{\textup{ss},k+1} \rVert_P \leq{}& c_1 \lVert x_{k} - x_{\textup{ss},k} \rVert_P + \eta c_2 \|u_k - u_k^*\| \notag \\ &+ \eta c_3 \|H - \hat{H}\|\|u_k\| + c_4 \|d_{k+1} - d_{k}\| + \eta c_5, \end{align} \]

(18)

where \( c_1,c_2,c_3,c_4\) , and \( c_5\) are constants specified in (24).

Proof

We start by analyzing the dynamics (1). Recall that \( x_{\textup{ss},k} = (I-A)^{-1}(Bu_{k} + d_{x,k})\) is the steady state of (1) induced by \( u_k\) and \( d_{x,k}\) . Let \( L_x^u \triangleq \|(I-A)^{-1}B\|\) and \( L_x^d \triangleq \|(I-A)^{-1}\|\) be the Lipschitz constants of \( x_{\textup{ss},k}\) with respect to \( u\) and \( d\) , respectively. Since \( \rho(A) < 1\) , for any given positive definite \( \bar{Q} \in \mathbb{R}^{n \times n}\) , there exists a unique positive definite \( P \in \mathbb{R}^{n \times n}\) satisfying the Lyapunov equation \( A^\top P A- P + \bar{Q} = 0\) . Therefore,

\[ \begin{align} \lVert x_{k+1} - x_{\textup{ss},k+1} \rVert_P &\stackrel{(a)}{\leq} \lVert x_{k+1} - x_{\textup{ss},k} \rVert_P + \lVert x_{\textup{ss},k+1} - x_{\textup{ss},k} \rVert_P \notag \\ &\stackrel{(b)}{\leq} \sqrt{1 - \gamma}\|x_{k} - x_{\textup{ss},k}\|_P + \lVert x_{\textup{ss},k+1} - x_{\textup{ss},k} \rVert_P \notag \\ &\stackrel{(c)}{\leq} \sqrt{1 - \gamma} \|x_{k} - x_{\textup{ss},k}\|_P + \lambda_{\max}(P)L_x^u \|u_{k+1} - u_{k}\| \notag \\ &\quad + \lambda_{\max}(P)L_x^d \|d_{k+1} - d_{k}\|. \end{align} \]

(20)

In (20), (a) uses the triangle inequality, and the contraction in (b) follows from the Lyapunov equation and the property of the weighted norm, where \( \gamma = \frac{\lambda_{\min}(\bar{Q})}{\lambda_{\max}(P)} \in (0,1)\) , and \( \lambda_{\min}(\cdot)\) and \( \lambda_{\max}(\cdot)\) represent the minimum and maximum eigenvalues of a matrix, respectively. Moreover, (c) uses the triangle inequality and the Lipschitz continuity of \( x_{\textup{ss},k}\) .
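The matrix \( P\) and the rate \( \gamma\) can be computed numerically from \( A\) and \( \bar{Q}\) ; a vectorized linear solve is one standard way to handle the discrete Lyapunov equation. A minimal sketch with an illustrative Schur-stable \( A\) :

```python
import numpy as np

# Illustrative Schur-stable A (rho(A) < 1) and positive definite Q_bar.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
Q_bar = np.eye(2)

# Solve A.T @ P @ A - P + Q_bar = 0 by vectorization: with row-major
# flattening, (A.T @ P @ A).flatten() = kron(A.T, A.T) @ P.flatten(),
# so (kron(A.T, A.T) - I) vec(P) = -vec(Q_bar).
n = A.shape[0]
vec_P = np.linalg.solve(np.kron(A.T, A.T) - np.eye(n * n), -Q_bar.flatten())
P = vec_P.reshape(n, n)

# Contraction rate of the P-weighted norm along the dynamics.
gamma = np.linalg.eigvalsh(Q_bar).min() / np.linalg.eigvalsh(P).max()
```

The resulting \( \gamma\) lies in \( (0,1)\) , and \( \lVert A x \rVert_P \leq \sqrt{1-\gamma}\, \lVert x \rVert_P\) holds for every \( x\) , which is exactly the contraction used in (b).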

We proceed to analyze the term \( \|u_{k+1} - u_{k}\|\) in (20). For controller (13), let

\[ \begin{align*} \hat{\nabla} \Phi_{\ell_2,k}(u_{k},y_k) &= 2R u_{k} + 2\lambda \hat{H}^{\top} Q (y_k - r_k) + 2\rho_{\text{gen}} u_{k}, \\ \nabla \Phi_{\ell_2,k}(u_{k}) &= 2R u_{k} + 2\lambda \hat{H}^{\top} Q (\hat{H}u_{k} + d_k - r_k) + 2\rho_{\text{gen}} u_{k} \end{align*} \]

be the update direction and the true gradient at \( u_{k}\) , respectively. Further, let \( L_{\Phi'}^u = \|2R + 2\lambda\hat{H}^\top Q \hat{H}\|\) be the Lipschitz constant of \( \nabla \Phi_{\ell_2,k}\) with respect to \( u\) . Therefore, we have

\[ \begin{align} \lVert u_{k+1} - u_{k} \rVert &= \|u_{k} - \eta \hat{\nabla} \Phi_{\ell_2,k}(u_{k},y_k) - u_{k} \| \notag \\ & \stackrel{(a)}{\leq} \eta \lVert \nabla \Phi_{\ell_2,k}(u_{k}) \rVert + \eta \|\hat{\nabla} \Phi_{\ell_2,k}(u_{k},y_k) - \nabla \Phi_{\ell_2,k}(u_{k}) \| \notag \\ & \stackrel{(b)}{\leq} \eta \lVert \nabla \Phi_{\ell_2,k}(u_{k}) - \nabla\Phi_{\ell_2,k}(u_k^*) \rVert \notag \\ &\quad + 2\eta \lambda\|\hat{H}^\top Q\|\|y_k - (\hat{H}u_{k}+d_k)\| \notag \\ & \stackrel{(c)}{\leq} \eta L_{\Phi'}^u \|u_{k} - u_{k}^*\| + 2\eta \lambda\|\hat{H}^\top Q\| \frac{\|C\|}{\lambda_{\min}(P)} \|x_k - x_{\textup{ss},k}\|_P \notag \\ &\quad + 2\eta \lambda \|\hat{H}^\top Q\| \|H - \hat{H}\|\|u_k\|, \end{align} \]

(22)

where (a) uses the triangle inequality; (b) is because of the triangle inequality and \( u_k^*\) being the optimal point, i.e., \( \nabla\Phi_{\ell_2,k}(u_k^*) = 0\) ; (c) follows from the Lipschitz continuity of \( \nabla \Phi_{\ell_2,k}\) , the addition and subtraction of \( Hu_k+d_k\) inside \( \|y_k - (\hat{H}u_{k}+d_k)\|\) , the expression of \( y_k\) , and the property of the weighted norm. For controller (14), we can perform a similar analysis and obtain an upper bound akin to (22), albeit with an additional term \( 2\eta \|\rho_{\textup{col}}\|\) arising from \( 0 \in \nabla\Phi_{\ell_2,k}(u_k^*) + \rho_{\textup{col}}^\top \partial |u_k|\) . We incorporate the above results into (20) and obtain (18), where the constants are

\[ \begin{equation} \begin{split} c_1 &= \sqrt{1-\gamma} + 2\eta \lambda L_x^u \|\hat{H}^\top Q\| \|C\| \frac{\lambda_{\max}(P)}{\lambda_{\min}(P)},\\ c_2 &= \lambda_{\max}(P)L_x^u L_{\Phi'}^u, \\ c_3 &= 2 \lambda L_x^u \lambda_{\max}(P) \|\hat{H}^\top Q\|, \\ c_4 &= \lambda_{\max}(P)L_x^d, \\ c_5 &= \left\{ \begin{array}{ll} 0, & \textup{for } (13), \\ 2 \lambda_{\max}(P)L_x^u\|\rho_{\textup{col}}\|, & \textup{for } (14). \end{array}\right. \end{split} \end{equation} \]

(24)

Therefore, Lemma 1 is proved.
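The gap between the update direction and the true gradient, which underlies step (b) of (22), equals the measurement-mismatch term \( 2\lambda \hat{H}^\top Q (y_k - (\hat{H}u_k + d_k))\) . This identity can be checked numerically; all matrices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3  # illustrative output and input dimensions
R = np.eye(m)
Q = np.eye(n)
lam, rho_gen = 0.5, 0.1
H_hat = rng.normal(size=(n, m))
u, d, r = rng.normal(size=m), rng.normal(size=n), rng.normal(size=n)
y = rng.normal(size=n)  # measured output (need not equal H_hat @ u + d)

# Update direction (uses the measurement y) vs. true gradient (uses the model).
dir_hat = 2 * R @ u + 2 * lam * H_hat.T @ Q @ (y - r) + 2 * rho_gen * u
grad = 2 * R @ u + 2 * lam * H_hat.T @ Q @ (H_hat @ u + d - r) + 2 * rho_gen * u

gap = dir_hat - grad  # equals 2*lam * H_hat.T @ Q @ (y - (H_hat @ u + d))
```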

In the following lemma, we characterize the properties of the controller iterations (13) and (14). The objective (8) and the quadratic part of (10) are strongly convex and smooth in \( u\) . Let \( \mu_{\Phi}\) and \( L_{\Phi}\) be the corresponding constants of strong convexity and smoothness, respectively. Recall that \( P\) is the matrix appearing in the weighted norm in Lemma 1.

Lemma 2

Let the conditions of Theorem 2 hold. The controller iterations (13) and (14) satisfy

\[ \begin{equation} \begin{split} \|u_{k+1} - u_{k+1}^*\| \leq& \alpha \|u_{k} - u_{k}^*\| + \eta \frac{L_T^y \|C\|}{\lambda_{\min}(P)} \lVert x_k - x_{\textup{ss},k}\rVert_P \\ &+ \eta L_T^y \|H-\hat{H}\| \|u_{k}\| + \lVert u_{k+1}^* - u_k^* \rVert, \end{split} \end{equation} \]

(25)

where \( \alpha = \sqrt{1-\eta(2\mu_{\Phi} - \eta L_{\Phi}^2)}\) , and \( L_T^y = 2\lambda \|\hat{H}^\top Q\|\) .

Proof

Let the right-hand side of (13) or (14) be denoted by \( T(u_k,y_k)\) . The mapping \( T(u,y)\) is \( \eta L_T^y\) -Lipschitz in \( y\) , where \( L_T^y = 2\lambda \|\hat{H}^\top Q\|\) . Therefore,

\[ \begin{split} \lVert u_{k+1} &- u_{k+1}^* \rVert \stackrel{(a)}{\leq} \lVert T\bigl(u_k, y_k\bigr) - u_k^* \rVert + \|u_{k+1}^* - u_k^*\| \\ & \stackrel{(b)}{\le} \lVert T\bigl(u_{k}, y_k\bigr) - T\bigl(u_{k}, Hu_{k} + d_k\bigr) \rVert \\ &~ + \lVert T\bigl(u_{k}, Hu_{k} + d_k\bigr) - T\bigl(u_{k}, \hat{H}u_{k} + d_k\bigr) \rVert \\ &~ + \lVert T\bigl(u_{k}, \hat{H}u_{k} + d_k\bigr) - u_k^* \rVert + \|u_{k+1}^* - u_k^*\| \\ & \stackrel{(c)}{\le} \eta L_T^y \lVert y_k - (Hu_{k} + d_k)\rVert + \eta L_T^y \|H-\hat{H}\| \|u_{k}\| \\ &~ + \alpha \|u_k - u_k^*\| + \lVert u_{k+1}^* - u_k^* \rVert \\ & \stackrel{(d)}{\le} \eta \frac{L_T^y \|C\|}{\lambda_{\min}(P)} \lVert x_k - x_{\textup{ss},k}\rVert_P + \eta L_T^y \|H-\hat{H}\| \|u_{k}\| \\ &~+ \alpha \lVert u_{k} - u_{k}^* \rVert + \lVert u_{k+1}^* - u_k^* \rVert, \\ \end{split} \]

where (a) and (b) use the triangle inequality; (c) follows from the Lipschitz continuity of \( T\) and the contraction of \( T\) (see [36, Proposition 25.9], where \( \alpha\) is given in the lemma when \( \eta \in (0, 2\mu_\Phi / L_\Phi^2)\) , and we also use the non-expansiveness property of the proximal operator for (14)); and (d) applies (1) and the property of the weighted norm.
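The contraction factor \( \alpha\) in (c) can be illustrated on a strongly convex quadratic: one gradient step with \( \eta \in (0, 2\mu_\Phi/L_\Phi^2)\) shrinks the distance to the minimizer by at least a factor \( \alpha\) . A self-contained sketch (the quadratic below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5
# Strongly convex quadratic Phi(u) = u^T S u / 2, with strong-convexity
# constant mu = lambda_min(S) and smoothness constant L = lambda_max(S).
B = rng.normal(size=(m, m))
S = B @ B.T + np.eye(m)
mu, L = np.linalg.eigvalsh(S).min(), np.linalg.eigvalsh(S).max()

eta = mu / L**2                       # lies inside (0, 2*mu/L**2)
alpha = np.sqrt(1 - eta * (2 * mu - eta * L**2))

u_star = np.zeros(m)                  # minimizer of Phi
u = rng.normal(size=m)
u_next = u - eta * S @ u              # gradient step: grad Phi(u) = S @ u

ratio = np.linalg.norm(u_next - u_star) / np.linalg.norm(u - u_star)
```

For this step size, \( \alpha \in (0,1)\) and the observed contraction ratio never exceeds \( \alpha\) .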

6.2 Proof of Theorem 2

The main idea is to analyze the coupled evolution of the state dynamics and the controller iterations, whose properties are established in Lemmas 1 and 2.

Proof

The coupling between the state dynamics and controller iterations can be compactly written as

\[ \begin{align} \underbrace{\begin{bmatrix} \|u_{k+1} - u_{k+1}^*\| \\ \lVert x_{k+1} - x_{\textup{ss},k+1} \rVert_P \end{bmatrix}}_{\triangleq w_{k+1}} \leq{}& \underbrace{ \begin{bmatrix} \alpha & \eta \frac{L_T^y \|C\|}{\lambda_{\min}(P)} \\ \eta c_2 & c_1 \end{bmatrix}}_{\triangleq M} \begin{bmatrix} \|u_k - u_k^*\| \\ \lVert x_k - x_{\textup{ss},k} \rVert_P \end{bmatrix} \notag \\ &+ \underbrace{ \begin{bmatrix} \eta L_T^y \|H-\hat{H}\| \|u_{k}\| + \lVert u_{k+1}^* - u_k^* \rVert \\ \eta c_3 \|H - \hat{H}\|\|u_k\| + c_4 \|d_{k+1} - d_{k}\| + \eta c_5 \end{bmatrix}}_{\triangleq q_k}, \end{align} \]

(26)

where the constants \( c_1\) to \( c_5\) are given by (24). Note that \( M\) in (26) is a \( 2\) -by-\( 2\) positive matrix, and therefore its Perron eigenvalue equals \( \rho(M)\) . Hence, the requirement that \( \rho(M) < 1\) is equivalent to \( m_{11} + m_{22} - m_{11}m_{22} + m_{21}m_{12} < 1\) , where \( m_{ij}\) denotes the \( ij\) -th element of \( M\) . This inequality translates to

\[ \begin{equation} g(\eta) \triangleq \frac{L_T^y \|C\| c_2}{\lambda_{\min}(P)} \eta^2 + \alpha + c_1 - \alpha c_1 < 1, \end{equation} \]

(28)

where \( \alpha\) and \( c_1\) are given in Lemma 2 and (24), respectively. The function \( g(\eta)\) satisfies \( g(0) = 1, g'(0) < 0\) . Hence, there exists \( \eta^* \in (0, 2\mu_\Phi / L_\Phi^2)\) such that for any \( \eta \in (0, \eta^*)\) , \( g(\eta) < 1\) , implying \( \rho(M) < 1\) . We telescope (26) and obtain

\[ \begin{equation} w_k \leq M^k w_0 + \sum^{k-1}_{i=0} M^{k-i} q_{i+1}. \end{equation} \]

(29)

When \( \eta \in (0, \eta^*)\) , there exist \( r > 0\) and \( c_M \in [0, 1)\) such that \( \lVert M^k \rVert \leq r (c_M)^k\) ; see also [33, Chapter 5]. Hence, we obtain from (29) the following inequality

\[ \begin{split} \lVert w_k \rVert & \leq r (c_M)^k \lVert w_0 \rVert + \sum^{k-1}_{i=0} r (c_M)^{k-i} \lVert q_{i+1} \rVert \\ & \stackrel{(a)}{\le} r (c_M)^k \lVert w_0 \rVert + r c_M \lVert \bar{q} \rVert \sum^{k-1}_{i=0} (c_M)^i \\ & \stackrel{(b)}{\le} r (c_M)^k \lVert w_0 \rVert + r \frac{c_M}{1 - c_M} \lVert \bar{q} \rVert, \\ \end{split} \]

where (a) is due to \( \bar{q} \triangleq \sup_{i \in [k]} q_i\) , and (b) uses the upper bound on the partial sum of a geometric series. Therefore, (15) is proved.
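As a sanity check on the step from \( \rho(M) < 1\) to the scalar inequality, the two conditions coincide for positive \( 2 \times 2\) matrices whose diagonal entries are below one (as \( \alpha\) and \( c_1\) are for small \( \eta\) ). A numerical sketch with random matrices of this form:

```python
import numpy as np

rng = np.random.default_rng(3)

def spectral_radius_lt_one(M):
    return np.max(np.abs(np.linalg.eigvals(M))) < 1

def scalar_condition(M):
    # m11 + m22 - m11*m22 + m21*m12 < 1, as used in the proof.
    return M[0, 0] + M[1, 1] - M[0, 0] * M[1, 1] + M[1, 0] * M[0, 1] < 1

def random_M():
    # Positive 2x2 matrix with diagonal entries below one, matching the
    # structure of M in (26) for small step sizes; off-diagonals positive.
    M = rng.uniform(0.01, 1.0, size=(2, 2))
    M[0, 0], M[1, 1] = rng.uniform(0.01, 0.99, size=2)
    return M

# The two tests agree on every sampled matrix.
agree = all(
    spectral_radius_lt_one(M) == scalar_condition(M)
    for M in (random_M() for _ in range(1000))
)
```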

Acknowledgement

We thank Prof. Linbin Huang for inspirational discussions.

References

[1] Andrea Simonetto, Emiliano Dall'Anese, Santiago Paternain, Geert Leus, and Georgios B. Giannakis, "Time-varying convex optimization: Time-structured algorithms and applications," Proceedings of the IEEE, vol. 108, no. 11, pp. 2032–2048, 2020.

[2] Adrian Hauswirth, Zhiyu He, Saverio Bolognani, Gabriela Hug, and Florian Dörfler, "Optimization Algorithms as Robust Feedback Controllers," Annual Reviews in Control, vol. 57, 2024.

[3] Dinesh Krishnamoorthy and Sigurd Skogestad, "Real-time optimization as a feedback control problem - A review," Computers & Chemical Engineering, 2022.

[4] Kartik B. Ariyur and Miroslav Krstić, Real-Time Optimization by Extremum-Seeking Control. John Wiley & Sons, USA, 2003.

[5] Alejandro Marchetti, Benoit Chachuat, and Dominique Bonvin, "Modifier-adaptation methodology for real-time optimization," Industrial & Engineering Chemistry Research, vol. 48, no. 13, pp. 6022–6033, 2009.

[6] Moritz Diehl, Hans Georg Bock, and Johannes P. Schlöder, "A real-time iteration scheme for nonlinear optimization in optimal feedback control," SIAM Journal on Control and Optimization, vol. 43, no. 5, pp. 1714–1736, 2005.

[7] Marcello Colombino, John W. Simpson-Porco, and Andrey Bernstein, "Towards robustness guarantees for feedback-based optimization," in Proceedings of the IEEE 58th Conference on Decision and Control, 2019, pp. 6207–6214.

[8] Adrian Hauswirth, Saverio Bolognani, Gabriela Hug, and Florian Dörfler, "Timescale Separation in Autonomous Optimization," IEEE Transactions on Automatic Control, vol. 66, no. 2, pp. 611–624, 2021.

[9] Liam S. P. Lawrence, John W. Simpson-Porco, and Enrique Mallada, "Linear-Convex Optimal Steady-State Control," IEEE Transactions on Automatic Control, vol. 66, no. 11, pp. 5377–5384, 2021.

[10] Gianluca Bianchin, Jorge Cortés, Jorge I. Poveda, and Emiliano Dall'Anese, "Time-varying optimization of LTI systems via projected primal-dual gradient flows," IEEE Transactions on Control of Network Systems, vol. 9, no. 1, pp. 474–486, 2021.

[11] Marcello Colombino, Emiliano Dall'Anese, and Andrey Bernstein, "Online Optimization as a Feedback Controller: Stability and Tracking," IEEE Transactions on Control of Network Systems, vol. 7, no. 1, pp. 422–432, 2020.

[12] Giuseppe Belgioioso, Dominic Liao-McPherson, Mathias Hudoba de Badyn, Saverio Bolognani, Roy S. Smith, John Lygeros, and Florian Dörfler, "Online feedback equilibrium seeking," IEEE Transactions on Automatic Control, vol. 70, no. 1, pp. 203–218, 2025.

[13] Ana M. Ospina, Nicola Bastianello, and Emiliano Dall'Anese, "Feedback-based optimization with sub-Weibull gradient errors and intermittent updates," IEEE Control Systems Letters, vol. 6, pp. 2521–2526, 2022.

[14] Liliaokeawawa Cothren, Gianluca Bianchin, and Emiliano Dall'Anese, "Online Optimization of Dynamical Systems With Deep Learning Perception," IEEE Open Journal of Control Systems, vol. 1, pp. 306–321, 2022.

[15] Lukas Ortmann, Christian Rubin, Alessandro Scozzafava, Janick Lehmann, Saverio Bolognani, and Florian Dörfler, "Deployment of an Online Feedback Optimization Controller for Reactive Power Flow Optimization in a Distribution Grid," in Proceedings of IEEE PES ISGT Europe, 2023.

[16] Andrea Simonetto, Emiliano Dall'Anese, Julien Monteil, and Andrey Bernstein, "Personalized optimization with user's feedback," Automatica, vol. 131, art. no. 109767, 2021.

[17] Dinesh Krishnamoorthy and Francis J. Doyle III, "Model-free real-time optimization of process systems using safe Bayesian optimization," AIChE Journal, vol. 69, no. 4, 2023.

[18] Wenjie Xu, Colin N. Jones, Bratislav Svetozarevic, Christopher R. Laughman, and Ankush Chakrabarty, "Violation-aware contextual Bayesian optimization for controller performance optimization with unmodeled constraints," Journal of Process Control, vol. 138, art. no. 103212, 2024.

[19] Jorge I. Poveda and Andrew R. Teel, "A robust event-triggered approach for fast sampled-data extremization and learning," IEEE Transactions on Automatic Control, vol. 62, no. 10, pp. 4949–4964, 2017.

[20] Yue Chen, Andrey Bernstein, Adithya Devraj, and Sean Meyn, "Model-free primal-dual methods for network optimization with application to real-time optimal power flow," in Proceedings of the American Control Conference, 2020, pp. 3140–3147.

[21] Yujie Tang, Zhaolin Ren, and Na Li, "Zeroth-order feedback optimization for cooperative multi-agent systems," Automatica, vol. 148, 2023.

[22] Zhiyu He, Saverio Bolognani, Jianping He, Florian Dörfler, and Xinping Guan, "Model-Free Nonlinear Feedback Optimization," IEEE Transactions on Automatic Control, vol. 69, no. 7, pp. 4554–4569, 2024.

[23] Xin Chen, Jorge I. Poveda, and Na Li, "Continuous-time zeroth-order dynamics with projection maps: Model-free feedback optimization with safety guarantees," IEEE Transactions on Automatic Control, 2025.

[24] Miguel Picallo, Lukas Ortmann, Saverio Bolognani, and Florian Dörfler, "Adaptive real-time grid operation via Online Feedback Optimization with sensitivity estimation," Electric Power Systems Research, vol. 212, 2022.

[25] Alejandro D. Dominguez-Garcia, Madi Zholbaryssov, Temitope Amuda, and Olaoluwapo Ajala, "An Online Feedback Optimization Approach to Voltage Regulation in Inverter-Based Power Distribution Networks," in Proceedings of the American Control Conference, 2023, pp. 1868–1873.

[26] Anurag Agarwal, John W. Simpson-Porco, and Lacra Pavel, "Model-Free Game-Theoretic Feedback Optimization," in Proceedings of the European Control Conference, 2023, pp. 1–8.

[27] Gianluca Bianchin, Miguel Vaquero, Jorge Cortés, and Emiliano Dall'Anese, "Online Stochastic Optimization for Unknown Linear Systems: Data-Driven Controller Synthesis and Analysis," IEEE Transactions on Automatic Control, vol. 69, no. 7, pp. 4411–4426, 2024.

[28] Huan Xu, Constantine Caramanis, and Shie Mannor, "Robust Regression and Lasso," IEEE Transactions on Information Theory, vol. 56, no. 7, pp. 3561–3574, 2010.

[29] Dimitris Bertsimas, David B. Brown, and Constantine Caramanis, "Theory and applications of robust optimization," SIAM Review, vol. 53, no. 3, pp. 464–501, 2011.

[30] Linbin Huang, Jianzhe Zhen, John Lygeros, and Florian Dörfler, "Robust data-enabled predictive control: Tractable formulations and performance guarantees," IEEE Transactions on Automatic Control, vol. 68, no. 5, pp. 3163–3170, 2023.

[31] Aharon Ben-Tal and Arkadi Nemirovski, "Selected topics in robust convex optimization," Mathematical Programming, vol. 112, pp. 125–158, 2008.

[32] Laurent El Ghaoui and Hervé Lebret, "Robust Solutions to Least-Squares Problems with Uncertain Data," SIAM Journal on Matrix Analysis and Applications, vol. 18, no. 4, pp. 1035–1064, 1997.

[33] J. P. LaSalle, The Stability and Control of Discrete Processes, vol. 62. Springer, USA, 1986.

[34] Ray Daniel Zimmerman, Carlos Edmundo Murillo-Sánchez, and Robert John Thomas, "MATPOWER," IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12–19, 2011.

[35] Lukas Ortmann, Saverio Bolognani, Florian Dörfler, Jean Maeght, and Patrick Panciatici, "UNICORN".

[36] Heinz H. Bauschke and Patrick L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, USA, 2011.