First published on Wednesday, Jul 2, 2025 and last modified on Wednesday, Jul 2, 2025 by François Chaplais.
Automatic Control Laboratory, ETH Zürich, Switzerland
Automatic Control Laboratory, ETH Zürich, Switzerland and Max Planck Institute for Intelligent Systems, Tübingen, Germany
Automatic Control Laboratory, ETH Zürich, Switzerland
Automatic Control Laboratory, ETH Zürich, Switzerland
Max Planck Institute for Intelligent Systems, Tübingen, Germany
Automatic Control Laboratory, ETH Zürich, Switzerland
Feedback optimization optimizes the steady state of a dynamical system by implementing optimization iterations in closed loop with the plant. It relies on online measurements and limited model information, namely, the input-output sensitivity. In practice, various issues including inaccurate modeling, lack of observation, or changing conditions can lead to sensitivity mismatches, causing closed-loop sub-optimality or even instability. To handle such uncertainties, we pursue robust feedback optimization, where we optimize the closed-loop performance against all possible sensitivities lying in specific uncertainty sets. We provide tractable reformulations for the corresponding min-max problems via regularizations and characterize the online closed-loop performance through the tracking error in case of time-varying optimal solutions. Simulations on a distribution grid illustrate the effectiveness of our robust feedback optimization controller in addressing sensitivity mismatches in a non-stationary environment.
Acknowledgement: this work was supported by the Max Planck ETH Center for Learning Systems, the SNSF via NCCR Automation (grant agreement 51NF40 80545), and the German Research Foundation.
Modern engineering systems are increasingly complex, large-scale, and variable, as seen in power grids, supply chains, and recommender systems. Achieving optimal steady-state operation of these systems is both critical and challenging. In this regard, numerical optimization pipelines operate in an open-loop manner, whereby solutions are found based on an explicit formulation of the input-output map of the system and knowledge of disturbances. However, the reliance on accurate models poses restrictions and renders these pipelines unfavorable in complex environments.
Feedback optimization is an emerging paradigm for steady-state optimization of a dynamical system [1, 2, 3]. At the heart of feedback optimization is the interconnection between an optimization-based controller and a physical system. This closed-loop approach is similar in spirit to extremum seeking [4], modifier adaptation [5], and real-time iterations [6]. Nonetheless, feedback optimization effectively handles high-dimensional objectives and coupling constraints, adapts to non-stationary conditions, and entails less computational effort (see the review in [2]).
Thanks to the iterative structure that incorporates real-time measurements and performance objectives, feedback optimization enjoys closed-loop stability [7], optimality [8, 9], constraint satisfaction [10], and online adaptation [11, 12, 13, 14, 15]. However, these salient properties rely on limited model information, i.e., the input-output sensitivity of a system. This requirement follows from using the chain rule to construct gradients in iterative updates. In practice, different issues can render the sensitivity inaccurate or elusive, e.g., corrupted data, lack of measurements, or changing conditions. As we will show in Section 2.2, such sensitivity errors can accumulate in the closed loop and cause significant sub-optimality or even divergence.
Many approaches have been developed to address inexact sensitivities in feedback optimization. A major stream leverages model-free iterations, where controllers entirely bypass sensitivities. Such model-free operations are typically enabled by derivative-free optimization, including Bayesian [16, 17, 18] and zeroth-order optimization [19, 20, 21, 22, 23]. However, controllers based on Bayesian optimization tend to be computationally expensive for high-dimensional problems, whereas zeroth-order feedback optimization brings increased sample complexity. Therefore, it is desirable to incorporate structural, albeit inexact, sensitivity information into controller iterations rather than discard it altogether.
There are two primary solutions to handle model uncertainty without resorting to model-free iterations: adaptation and robustness. In the context of feedback optimization, adaptive schemes leverage offline or online data to refine knowledge of sensitivities, thereby facilitating closed-loop convergence. Examples include learning sensitivity via least squares [24, 25] or stochastic approximation [26], as well as constructing behavioral representations of sensitivity from input-output data [27]. However, adaptive strategies impose additional requirements for data, computation, and estimation. Restrictions arise in scenarios involving high-dimensional systems and limited computational power, where sensitivity estimation can be challenging.
In this paper, we consider robust feedback optimization, where the closed-loop performance is optimized given the worst-case realization of the sensitivity in some uncertainty sets. This is formalized as a min-max problem for which tractable reformulations via regularization are further provided. Our robust feedback optimization controllers feature provable convergence guarantees for time-varying problems with changing disturbances and references. Compared to the above adaptive schemes, our controllers only leverage an inexact sensitivity and hence are easy to implement. In contrast to related robust strategies in learning [28, 29] and data-driven control [30], we tackle a more demanding setting wherein model uncertainty is intertwined with both system dynamics and controller iterations. Our main contributions are as follows.
The rest of this paper is organized as follows. Section 2 motivates and presents the problem setup. Section 3 provides tractable reformulations and our robust feedback optimization controllers. The closed-loop performance guarantee is established in Section 4, followed by numerical evaluations on a distribution grid in Section 5. Finally, Section 6 concludes the article and discusses future directions.
We consider the following dynamical system
\( x_{k+1} = A x_k + B u_k + d_{x,k}, \qquad y_k = C x_k + d_{y,k} \)   (1)
where \( x_k \in \mathbb{R}^n\) , \( u_k \in \mathbb{R}^m\) , \( y_k \in \mathbb{R}^p\) , \( d_{x,k} \in \mathbb{R}^n\) , and \( d_{y,k} \in \mathbb{R}^p\) denote the state, input, output, exogenous disturbance, and measurement noise at time \( k\) , respectively. Further, \( A \in \mathbb{R}^{n \times n}\) , \( B \in \mathbb{R}^{n \times m}\) , and \( C \in \mathbb{R}^{p \times n}\) are system matrices. We focus on a stable system, i.e., the spectral radius \( \rho(A)\) of \( A\) in (1) is less than \( 1\) . In practice, this condition also holds if this system is prestabilized. Given fixed inputs and disturbances (i.e., \( u_k = u, d_{x,k} = d_x, d_{y,k} = d_y, \forall k \in \mathbb{N}\) ), system (1) admits a unique steady-state output
\( y_{\textup{ss}}(u, d) = H u + d, \quad \text{with } H \triangleq C(I-A)^{-1}B \text{ and } d \triangleq C(I-A)^{-1}d_x + d_y \)   (2)
In (2), \( H \in \mathbb{R}^{p \times m}\) is the sensitivity matrix of system (1).
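As a minimal numerical sketch (with hypothetical system matrices chosen only for illustration), the sensitivity \( H = C(I-A)^{-1}B\) indeed matches the output to which a stable system settles under a constant input:

```python
import numpy as np

# Hypothetical small stable system (rho(A) < 1); matrices are for illustration only.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 1.0]])

# Steady-state sensitivity of (2): H = C (I - A)^{-1} B.
H = C @ np.linalg.solve(np.eye(2) - A, B)

# Check: simulate (1) with a constant input until the state settles.
u = np.array([2.0])
x = np.zeros(2)
for _ in range(200):
    x = A @ x + B @ u          # d_x = 0
y_ss = C @ x                   # d_y = 0, so the settled output equals H u
assert np.allclose(y_ss, H @ u)
```

Here the loop converges geometrically since \( \rho(A) = 0.5 < 1\) , so 200 iterations suffice for the settled output to match \( Hu\) to machine precision.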
A performance objective characterizing the input-output performance of system (1) at each time \( k \in \mathbb{N}\) is
\( \Phi_k(u) \triangleq \|u\|_R^2 + \lambda \left\| y_{\textup{ss}}(u, d_k) - r_k \right\|_Q^2 \)   (3)
where \( R \in \mathbb{R}^{m \times m}\) and \( Q \in \mathbb{R}^{p \times p}\) are positive semidefinite matrices, \( \|u\|_R = \sqrt{u^\top R u}\) and \( \|y\|_Q = \sqrt{y^\top Q y}\) denote weighted norms, \( \lambda \geq 0\) is a weight parameter, and \( r_k \in \mathbb{R}^p\) is the reference at time \( k\) . Further, \( y_{\textup{ss}}(u, d_k) = Hu + d_k\) is the steady-state output associated with the input \( u\) and the disturbance \( d_k \triangleq C(I-\!A)^{-1}d_{x,k} + d_{y,k}\) at time \( k\) . The function (3) penalizes the input cost and the difference between the steady-state output and the reference.
To optimize (3), numerical solvers require explicit knowledge of the map \( y_{\textup{ss}}\) as per (2), together with an accurate value of the disturbance \( d\) , which can be restrictive in applications. In contrast, feedback optimization leverages real-time output measurements and limited model information, namely, the sensitivity matrix \( H\) , thus steering system (1) to optimal operating conditions [1, 2].
Many practical issues including lack of measurements, corrupted data, and changing conditions cause model uncertainty, i.e., sensitivity errors [2]. We present a motivating example to show how such errors invalidate feedback optimization by inducing closed-loop sub-optimality or instability. While this example is synthetic, we observe similar phenomena in realistic power grid simulations (see Section 5).
We consider a system abstracted by the steady-state map (2) with fixed disturbances. We generate inexact sensitivities \( \hat{H}\) in the following two fashions.
To optimize (3), consider the following feedback optimization controller with an inexact \( \hat{H}\)
\( u_{k+1} = u_k - \eta \left( 2 R u_k + 2 \lambda \hat{H}^\top Q (y_k - r_k) \right) \)   (4)
where \( \eta > 0\) is the step size. The update (4) follows a gradient descent iteration given the objective (3), using the real-time output measurement \( y_k\) of (1) to replace the steady-state output \( H u_k + d_k\) . Figure 1 illustrates the closed-loop performance when the controller (4) is applied to the system (1). We observe from Figure 2 that larger errors in sensitivities cause increased sub-optimality. Furthermore, Figure 3 demonstrates that when \( \eta\) is fixed, such a negative effect becomes more pronounced when the problem dimension grows.
Motivated by the above observations, we pursue robust feedback optimization, where we optimize a worst-case performance objective given any realization of sensitivity lying in uncertainty sets. In practice, we can obtain through prior knowledge or identification [15, 2] an inexact sensitivity \( \hat{H}\) , which differs from the true sensitivity \( H\) of (1) by \( \Delta_H\) , i.e., \( \hat{H} + \Delta_H = H\) . In view of \( \hat{H}\) and the uncertainty \( \Delta_H\) , our robust formulation is
\( \min_{u \in \mathbb{R}^m} \, \max_{\Delta_H \in \mathcal{D}} \; \|u\|_R^2 + \lambda \left\| (\hat{H} + \Delta_H) u + d_k - r_k \right\|_Q^2 \)   (5)
where \( \mathcal{D} \subset \mathbb{R}^{p \times m}\) is the uncertainty set wherein \( \Delta_H\) lies, \( d_k = C(I-\!A)^{-1}d_{x,k} + d_{y,k}\) aggregates the disturbances \( d_{x,k}\) and \( d_{y,k}\) , and \( r_k\) is the reference at time \( k\) . Different from (3), in (5) we robustify the steady-state specification of system (1) against the sensitivity uncertainty \( \Delta_H\) . Essentially, (5) implies minimizing the steady-state input-output performance for the worst-case realization of sensitivity. We examine the following types of uncertainty sets.
Generalized uncertainties described by
\( \mathcal{D}_{\text{gen}} \triangleq \left\{ \Delta_H \in \mathbb{R}^{p \times m} : \| \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H \|_F \leq \varrho_{\text{gen}} \right\} \)   (6)
where \( \varrho_{\text{gen}} \geq 0\) , and \( \|\cdot\|_F\) denotes the Frobenius norm.
Uncorrelated column-wise uncertainties of the form
\( \mathcal{D}_{\text{col}} \triangleq \left\{ \Delta_H \in \mathbb{R}^{p \times m} : \| (\lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H)_i \| \leq (\varrho_{\text{col}})_i, \; i = 1, \ldots, m \right\} \)   (7)
where \( (\lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H)_i\) denotes the \( i\) -th column of the matrix \( \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H\) and \( (\varrho_{\text{col}})_i\) denotes the \( i\) -th element of the vector \( \varrho_{\text{col}} \in \mathbb{R}^m\) , with \( (\varrho_{\text{col}})_i \geq 0\) .
In the above sets, \( \mathcal{D}_{\text{gen}}\) poses a bounded-norm restriction on the uncertainty \( \Delta_H\) . In contrast, \( \mathcal{D}_{\text{col}}\) bounds the norm of each column of \( \Delta_H\) , which is useful when different levels of confidence exist regarding how each component of \( u\) affects the output \( y\) . Both types are common in the robust optimization literature [29, 28, 30].
While problem (5) is unconstrained, we will discuss strategies to handle input and output constraints at the end of Section 3.2. We consider quadratic objectives in (5) to highlight intuition and facilitate the presentation of robust strategies. Promising extensions to handle general objectives can be built on modern advances in robust optimization [31].
We provide tractable reformulations of problem (5), thereby facilitating the design of robust feedback optimization controllers. Let \( u_k^* \in \mathbb{R}^m\) be the optimal point of problem (5) at time \( k\) . We analyze two cases involving the uncertainty sets discussed in Section 2.3.
\( \bullet\) The case with generalized uncertainties
For problem (5) with the uncertainty set \( \mathcal{D} = \mathcal{D}_{\text{gen}}\) (see (6)), the reformulated problem is
(8)
where the regularizer satisfies
(9)
\( \bullet\) The case with uncorrelated column-wise uncertainties
The reformulated problem associated with (5) involving \( \mathcal{D} = \mathcal{D}_{\text{col}}\) (see (7)) is
(10)
where \( |u| \in \mathbb{R}^m\) denotes the component-wise absolute value of \( u\) , and the regularizer \( \rho_{\textup{col}} \in \mathbb{R}^m\) satisfies
(11)
The following theorem establishes that the above reformulated problems share the same optimal points as problem (5).
Theorem 1
Problem (5) with the uncertainty set \( \mathcal{D} = \mathcal{D}_{\textup{gen}}\) in (6) and problem (8) share the same optimal point. Moreover, problem (5) with \( \mathcal{D} = \mathcal{D}_{\textup{col}}\) in (7) and problem (10) attain the same optimal point.
Proof
The objective of (5) can be compactly written as
\( \left\| (M + \Delta_M) u + d_k - r_k \right\|^2 \)   (12)
where \( M = \big[R^{\frac{1}{2}}; \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \hat{H}\big] \in \mathbb{R}^{(m+p) \times m}\) , and \( \Delta_M = \big[0; \lambda^{\frac{1}{2}} Q^{\frac{1}{2}} \Delta_H\big] \in \mathbb{R}^{(m+p) \times m}\) . Since the norm is non-negative, optimizing (12) is equivalent to optimizing \( \lVert (M + \Delta_{M}) u + d_k - r_k \rVert\) . When \( \mathcal{D} = \mathcal{D}_{\text{gen}}\) , we perform an analysis similar to that in [32, Theorem 3.1] and obtain
Moreover, the following two problems
share the same optimal point \( u_k^*\) if the condition (9) holds. This result can be proved by comparing the optimality conditions of both problems and noting that \( 0\) is a subgradient of \( \|u\|\) at \( u=0\) .
We proceed to analyze the case when \( \mathcal{D} = \mathcal{D}_{\text{col}}\) . Analogous to [28, Theorem 1], we obtain
Furthermore, the optimal points of the following problems
coincide when the regularizer satisfies the condition (11).
The \( \ell_2\) -regularizer in (8) and the \( \ell_1\) -regularizer in (10) admit the same interpretation as those in classical ridge and lasso regression [29]. In essence, these regularization terms penalize the magnitude of the input, helping to achieve closed-loop stability in the face of model uncertainty. The use of the \( \ell_1\) -regularizer also promotes sparsity of control inputs.
Remark 1
While the expressions of \( \rho_\textup{gen}\) and \( \rho_{\textup{col}}\) involve \( \|u_k^*\|\) , this dependence arises from the quadratic objective in (5) and the related step of equivalent reformulation. In practice, for a variation of (5) with non-squared \( \ell_2\) -norms, the regularizers in the reformulated problems will only entail the uncertainty bounds \( \varrho_{\text{gen}}\) and \( \varrho_{\text{col}}\) but not \( \|u_k^*\|\) .
Based on the reformulations in Section 3.1, we present our online robust feedback optimization controllers. These controllers leverage an inexact sensitivity \( \hat{H}\) and real-time output measurements of system (1). They employ optimization-based iterations, thereby driving the system to operating points characterized by (8) or (10).
For problem (8) corresponding to the case with generalized uncertainties, our robust feedback optimization controller employs the following gradient-based update
(13)
where \( \eta > 0\) is the step size. The update direction of the controller (13) is related to the negative gradient of (8) at \( u_k\) . Further, (13) uses the output measurement \( y_k\) of the true system (1) as feedback.
Problem (10), associated with the case with uncorrelated column-wise uncertainties, involves a nonsmooth regularizer \( \|u\|\) . Hence, building on proximal gradient descent, the proposed controller updates as follows:
(14)
where \( \eta > 0\) is the step size. In (14), \( \operatorname{prox}_{\eta \rho_{\textup{col}}}(u) \triangleq \arg\min_{u' \in \mathbb{R}^m} \eta \rho_{\textup{col}}^\top |u'| + \frac{1}{2}\|u' - u\|^2\) denotes the proximal operator of \( \eta \rho_{\textup{col}}^\top |u|\) , which amounts to component-wise soft thresholding \( \operatorname{sgn}(u_i) \max\{|u_i| - \eta (\rho_{\textup{col}})_i, 0\}\) , where \( \operatorname{sgn}(\cdot)\) is the sign function. Similar to (13), this controller uses the real-time output \( y_k\) of system (1) and iteratively calculates new inputs.
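The proximal step above can be sketched in a few lines; the numbers below are hypothetical and serve only to show how small components are zeroed out:

```python
import numpy as np

def prox_weighted_l1(u, w):
    """Proximal operator of the weighted l1 term w^T |u|:
    component-wise soft thresholding sgn(u_i) * max(|u_i| - w_i, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - w, 0.0)

u = np.array([0.8, -0.05, -1.5])
w = 0.1 * np.ones(3)                   # plays the role of eta * rho_col
u_next = prox_weighted_l1(u, w)        # components shrink toward zero; |u_i| <= w_i maps to 0
```

This shrinkage is precisely the mechanism by which the \( \ell_1\) -regularizer promotes sparse control inputs.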
We further discuss various extensions of the proposed controllers (13) and (14). In practice, restrictions on the input due to actuation limits or economic considerations can often be represented by a constraint set \( \mathcal{U} \subset \mathbb{R}^m\) . In this regard, we can project the iterate \( u_k\) generated by (13) or (14) back onto \( \mathcal{U}\) , thereby ensuring constraint satisfaction at every time step. Should output constraints be imposed, e.g., from safety requirements, we can augment the objectives in (8) and (10) with suitable penalty (e.g., quadratic or log-barrier) functions and incorporate the resulting derivative terms into the updates (13) and (14); see also [2, Section 2.4].
We present the performance guarantee of the closed-loop interconnection between system (1) and our robust feedback optimization controller. A major challenge is that sensitivity uncertainty is interlaced with system dynamics and controller iterations, complicating convergence analysis. To address this challenge, we analyze the coupled evolution of the system (1) and the proposed controller, while characterizing the cumulative effects of sensitivity uncertainty.
Recall that \( u_k^*\) is the optimal point of problem (5) at time \( k\) , and that \( d_k \triangleq C(I-\!A)^{-1}d_{x,k} + d_{y,k}\) aggregates the disturbances. We consider stable system (1), i.e., \( \rho(A) < 1\) . Let \( x_{\textup{ss},k} \in \mathbb{R}^n\) be the steady state of (1) induced by \( u_{k}\) and \( d_{x,k}\) . In other words, \( x_{\textup{ss},k} = Ax_{\textup{ss},k} + Bu_{k} + d_{x,k}\) , implying \( x_{\textup{ss},k} = (I-A)^{-1}(Bu_{k} + d_{x,k})\) . For any given positive definite \( \bar{Q} \in \mathbb{R}^{n \times n}\) , there exists a unique positive definite \( P \in \mathbb{R}^{n \times n}\) satisfying the Lyapunov equation \( A^\top P A- P + \bar{Q} = 0\) . Let \( \|x\|_P \triangleq \sqrt{x^\top P x}\) be the weighted norm and \( \lambda_{\max}(P)\) be the maximum eigenvalue of \( P\) . Our performance guarantee is as follows.
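The discrete Lyapunov equation above can be solved numerically; a small self-contained sketch (with a hypothetical \( A\) , using the vectorization identity \( \operatorname{vec}(A^\top P A) = (A^\top \otimes A^\top)\operatorname{vec}(P)\) ):

```python
import numpy as np

def dlyap(A, Qbar):
    """Solve the discrete Lyapunov equation A^T P A - P + Qbar = 0
    by vectorization: (I - A^T kron A^T) vec(P) = vec(Qbar)."""
    n = A.shape[0]
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(A.T, A.T),
                           Qbar.flatten(order='F'))
    return vecP.reshape((n, n), order='F')

A = np.array([[0.5, 0.1],
              [0.0, 0.4]])       # rho(A) < 1, so a unique solution exists
P = dlyap(A, np.eye(2))

# Verify the Lyapunov equation and positive definiteness of P.
assert np.allclose(A.T @ P @ A - P + np.eye(2), 0)
assert np.all(np.linalg.eigvalsh(P) > 0)
```

In practice one would use a dedicated routine (e.g., `scipy.linalg.solve_discrete_lyapunov`); the vectorized solve is shown here only to keep the sketch dependency-free beyond numpy.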
Theorem 2
Let system (1) be stable. There exists \( \eta^* > 0\) such that for any \( \eta \in (0, \eta^*]\) , the closed-loop interconnection between (1) and the controller (13) or (14) guarantees
(15)
(16)
where \( r_1, r_2 > 0\) , and \( c_M \in [0, 1)\) . Moreover,
where the constants are \( \bar{c}_1 = 2\lambda \|\hat{H}^\top Q\|\) , \( c_3 = 2 \lambda \|(I-\!A)^{-1}B\| \lambda_{\max}(P) \|\hat{H}^\top Q\|\) , \( c_4 = \lambda_{\max}(P)\|(I-\!A)^{-1}\|\) , and
Proof
The proof is provided in Section 6.2.
In Theorem 2, we characterize the closed-loop performance through the joint evolution of the distance \( \|u_{k} - u_{k}^*\|\) to the optimal point \( u_k^*\) and the distance \( \|x_{k} - x_{\textup{ss},k}\|\) to the steady state \( x_{\textup{ss},k}\) . The upper bound (15) is in the flavor of input-to-state stability [33] and similar to [12, 10, 14, 13]. In contrast to these works, we additionally characterize in (15) the cumulative effects of the given sensitivity uncertainty (i.e., \( \|H - \hat{H}\|\) ) and the regularizer corresponding to the uncertainty set (i.e., \( \rho_{\textup{col}}\) ). The effect of the initial conditions \( u_0\) and \( x_0\) vanishes exponentially fast, since \( c_M \in [0,1)\) . The asymptotic error is proportional to the shifts of optimal solutions \( u_k^*\) and disturbances \( d_k\) , as well as the sensitivity uncertainty, i.e., \( \|H - \hat{H}\|\) . The influence arising from this uncertainty can be tuned via the step size, see the terms in \( q_k\) . It is possible to further establish upper bounds on the distance of the output to the optimal steady-state output through the Lipschitz property of the dynamics (1).
We present a case study in a distribution grid to showcase the effectiveness of our robust feedback optimization controllers. Specifically, we consider real-time voltage regulation while minimizing active power curtailment and reactive power actuation. Our goal is to show that robustification is effective beyond an academic setting for theoretical guarantees and can address practical challenges such as nonlinear steady states and state-dependent sensitivities.
Consider a distribution grid with \( n \in \mathbb{N}\) photovoltaic inverters. Let \( p_{i,k}\) , \( q_{i,k}\) , \( p^{\textup{MPP}}_{i, k}\) , and \( v_{i,k}\) denote the active power, reactive power, maximum power point, and voltage of inverter \( i\) at time \( k\) , respectively, where \( i=1,…,n\) . Let \( u_{i,k} \triangleq [p_{i,k} - p^{\text{MPP}}_{i, k}, q_{i,k}]\) be the variable of inverter \( i\) . Let \( u_k = [u_{1,k}, …, u_{n,k}]\) and \( v_k = [v_{1,k}, …, v_{n,k}]\) be concatenated variables. Further, \( d_k\) represents the load at time \( k\) . The nonlinear map from \( u_k\) and \( d_k\) to \( v_k\) is given by the power flow solver [34]. We aim to regulate grid voltage and minimize renewable energy curtailments and reactive power actuation. This is formalized by the following problem
(17)
where \( R \in \mathbb{R}^{2n \times 2n}\) and \( Q \in \mathbb{R}^{n \times n}\) are positive definite cost matrices, \( \mathcal{U}_{i,k} \triangleq \{ [p_i, \, q_i] : 0 \leq p_i \leq p^{\text{MPP}}_{i, k}, q_{\min} \leq q_i \leq q_{\max} \}\) is the constraint set, and \( q_{\min}\) and \( q_{\max}\) are lower and upper bounds on reactive actuation, respectively.
We adopt the UNICORN 56-bus test case [35] with 5 photovoltaic inverters. Although the input-output sensitivity is a nonlinear function of \( u_k\) , we learn a constant approximation \( \hat{H}\) based on power flow linearization and historical data of the injected powers and voltages. This sensitivity becomes even more inexact when the distribution grid experiences a topology change, specifically when the point of common coupling is switched from bus 1 to bus 26. While the uncertainty set for \( \hat{H}\) is hard to characterize correctly, we tune the regularizers of (13) and (14) by gradually decreasing their values from conservative upper bounds. We augment the standard feedback optimization controller (4) and the proposed controllers (13) and (14) with projection to \( \mathcal{U}_{i,k}\) , use the same step size for these controllers, and apply them to the changed grid.
As shown in the first sub-figure of Figure 5, when implemented in a new environment with sensitivity uncertainty, standard feedback optimization causes oscillations and voltage violation. Note that the dashed lines in the sub-figures on the first row denote the maximum and minimum voltage limits, which equal \( 1.1\) p.u. and \( 0.9\) p.u., respectively. This standard controller also requires large reactive power actuation. In contrast, robust feedback optimization controllers maintain voltage stability after the point of common coupling changes. This is achieved by conservatively regulating control inputs, a consequence of regularization in the face of uncertainty. Furthermore, as shown in the sub-figures on the last row, the proposed controllers lead to less active power curtailments compared to the standard approach. Overall, robust feedback optimization effectively handles model uncertainty in this example of real-time voltage regulation.
We addressed steady-state optimization of a dynamical system subject to model uncertainty by presenting robust feedback optimization, which seeks optimal closed-loop performance given all possible sensitivities falling within bounded uncertainty sets. Tractable reformulations for this min-max problem via regularized steady-state specifications were then provided. We also showcased the adaptation and robustness of our controllers through theoretical tracking guarantees and numerical experiments on a distribution grid. Future avenues include tuning regularizers via differentiable programming, incorporating regularization into online sensitivity learning, and pursuing robustness against model uncertainty for nonlinear stable systems.
We provide two lemmas that quantify the state dynamics (1) and the controller iterations (13) and (14).
Lemma 1
Let the conditions of Theorem 2 hold. The state dynamics (1) ensure
(18)
(19)
where \( c_1,c_2,c_3,c_4\) , and \( c_5\) are constants specified in (24).
Proof
We start by analyzing the dynamics (1). Recall that \( x_{\textup{ss},k} = (I-A)^{-1}(Bu_{k} + d_{x,k})\) is the steady state of (1) induced by \( u_k\) and \( d_{x,k}\) . Let \( L_x^u \triangleq \|(I-A)^{-1}B\|\) and \( L_x^d \triangleq \|(I-A)^{-1}\|\) be the Lipschitz constants of \( x_{\textup{ss},k}\) with respect to \( u\) and \( d\) , respectively. Since \( \rho(A) < 1\) , for any given positive definite \( \bar{Q} \in \mathbb{R}^{n \times n}\) , there exists a unique positive definite \( P \in \mathbb{R}^{n \times n}\) satisfying the Lyapunov equation \( A^\top P A- P + \bar{Q} = 0\) . Therefore,
(20)
(21)
In (20), (a) uses the triangle inequality, and the contraction in (b) follows from the Lyapunov equation and the property of the weighted norm, where \( \gamma = \frac{\lambda_{\min}(\bar{Q})}{\lambda_{\max}(P)} \in (0,1)\) , and \( \lambda_{\min}(\cdot)\) and \( \lambda_{\max}(\cdot)\) represent the minimum and maximum eigenvalues of a matrix, respectively. Moreover, (c) uses the triangle inequality and the Lipschitz continuity of \( x_{\textup{ss},k}\) .
We proceed to analyze the term \( \|u_{k+1} - u_{k}\|\) in (20). For controller (13), let
be the update direction and the true gradient at \( u_{k}\) , respectively. Further, let \( L_{\Phi'}^u = \|2R + 2\lambda\hat{H}^\top Q \hat{H}\|\) be the Lipschitz constant of \( \nabla \Phi_{\ell_2,k}\) with respect to \( u\) . Therefore, we have
(22)
(23)
where (a) uses the triangle inequality; (b) is because of the triangle inequality and \( u_k^*\) being the optimal point, i.e., \( \nabla\Phi_{\ell_2,k}(u_k^*) = 0\) ; (c) follows from the Lipschitz continuity of \( \nabla \Phi_{\ell_2,k}\) , the addition and subtraction of \( Hu_k+d_k\) inside \( \|y_k - (\hat{H}u_{k}+d_k)\|\) , the expression of \( y_k\) , and the property of the weighted norm. For controller (14), we can perform a similar analysis and obtain an upper bound akin to (22), albeit with an additional term \( 2\eta \|\rho_{\textup{col}}\|\) arising from the optimality condition \( 0 \in \nabla\Phi_{\ell_2,k}(u_k^*) + \rho_{\textup{col}}^\top \partial |u_k^*|\) . We incorporate the above results into (20) and obtain (18), where the constants are
(24)
Therefore, Lemma 1 is proved.
In the following lemma, we characterize the property of the controller iterations (13) and (14). The objective (8) and the quadratic part of (10) are strongly convex and smooth in \( u\) . Let \( \mu_{\Phi}\) and \( L_{\Phi}\) be the corresponding constants of strong convexity and smoothness, respectively. Recall that \( P\) is the matrix appearing in the weighted norm in Lemma 1.
Lemma 2
Let the conditions of Theorem 2 hold. The controller iterations (13) and (14) satisfy
(25)
where \( \alpha = \sqrt{1-\eta(2\mu_{\Phi} - \eta L_{\Phi}^2)}\) , and \( L_T^y = 2\lambda \|\hat{H}^\top Q\|\) .
Proof
Let the right-hand side of (13) or (14) be denoted by \( T(u_k,y_k)\) . The mapping \( T(u,y)\) is \( \eta L_T^y\) -Lipschitz in \( y\) , where \( L_T^y = 2\lambda \|\hat{H}^\top Q\|\) . Therefore,
where (a) and (b) use the triangle inequality; (c) follows from the Lipschitz continuity of \( T\) and the contraction of \( T\) (see [36, Proposition 25.9], where \( \alpha\) is given in the lemma when \( \eta \in (0, 2\mu_\Phi / L_\Phi^2)\) , and we also use the non-expansiveness property of the proximal operator for (14)); and (d) applies (1) and the property of the weighted norm.
The main idea is to analyze the coupled evolution of the state dynamics and controller iterations, whose properties are established in Lemmas 1 and 2.
Proof
The coupling between the state dynamics and controller iterations can be compactly written as
(26)
(27)
where the constants \( c_1\) to \( c_4\) are given by (24). Note that \( M\) in (26) is a \( 2\) -by-\( 2\) positive matrix, and therefore its Perron eigenvalue equals \( \rho(M)\) . Hence, the requirement that \( \rho(M) < 1\) is equivalent to \( m_{11} + m_{22} - m_{11}m_{22} + m_{21}m_{12} < 1\) , where \( m_{ij}\) denotes the \( ij\) -th element of \( M\) . This inequality translates to
(28)
where \( \alpha\) and \( c_1\) are given in Lemma 2 and (24), respectively. The function \( g(\eta)\) satisfies \( g(0) = 1, g'(0) < 0\) . Hence, there exists \( \eta^* \in (0, 2\mu_\Phi / L_\Phi^2)\) such that for any \( \eta \in (0, \eta^*)\) , \( g(\eta) < 1\) , implying \( \rho(M) < 1\) . We telescope (26) and obtain
(29)
When \( \eta \in (0, \eta^*)\) , there exists \( r > 0\) and \( c_M \in [0, 1)\) such that \( \lVert M^k \rVert \leq r (c_M)^k\) , see also [33, Chapter 5]. Hence, we obtain from (29) the following inequality
where (a) is due to \( \bar{q} \triangleq \sup_{i \in [k]} q_i\) , and (b) uses the upper bound on the partial sum of a geometric series. Therefore, (15) is proved.
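For completeness, the geometric-series step can be spelled out as follows (a sketch, writing \( \bar{q}\) for the supremum of the perturbation terms and using \( \|M^j\| \leq r (c_M)^j\) ):

```latex
\[
\Bigl\| \sum_{i=0}^{k-1} M^{k-1-i} q_i \Bigr\|
\;\leq\; \sum_{i=0}^{k-1} \bigl\| M^{k-1-i} \bigr\| \, \|q_i\|
\;\leq\; r \, \bar{q} \sum_{j=0}^{k-1} (c_M)^{j}
\;\leq\; \frac{r \, \bar{q}}{1 - c_M},
\]
```

which is finite precisely because \( c_M \in [0,1)\) when \( \eta \in (0, \eta^*)\) .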
We thank Prof. Linbin Huang for inspirational discussions.
[1] Time-varying convex optimization: Time-structured algorithms and applications. Proceedings of the IEEE, 108(11):2032–2048, 2020.
[2] Optimization Algorithms as Robust Feedback Controllers. Annual Reviews in Control, 57, 2024.
[3] Real-time optimization as a feedback control problem - A review. Computers & Chemical Engineering, 2022.
[4] Real-time optimization by extremum-seeking control. John Wiley & Sons, USA, 2003.
[5] Modifier-adaptation methodology for real-time optimization. Industrial & Engineering Chemistry Research, 48(13):6022–6033, 2009.
[6] A real-time iteration scheme for nonlinear optimization in optimal feedback control. SIAM Journal on Control and Optimization, 43(5):1714–1736, 2005.
[7] Towards robustness guarantees for feedback-based optimization. In Proceedings of the 58th IEEE Conference on Decision and Control, pages 6207–6214, 2019.
[8] Timescale Separation in Autonomous Optimization. IEEE Transactions on Automatic Control, 66(2):611–624, 2021.
[9] Linear-Convex Optimal Steady-State Control. IEEE Transactions on Automatic Control, 66(11):5377–5384, 2021.
[10] Time-varying optimization of LTI systems via projected primal-dual gradient flows. IEEE Transactions on Control of Network Systems, 9(1):474–486, 2021.
[11] Online Optimization as a Feedback Controller: Stability and Tracking. IEEE Transactions on Control of Network Systems, 7(1):422–432, 2020.
[12] Online feedback equilibrium seeking. IEEE Transactions on Automatic Control, 70(1):203–218, 2025.
[13] Feedback-based optimization with sub-Weibull gradient errors and intermittent updates. IEEE Control Systems Letters, 6:2521–2526, 2022.
[14] Online Optimization of Dynamical Systems With Deep Learning Perception. IEEE Open Journal of Control Systems, 1:306–321, 2022.
[15] Deployment of an Online Feedback Optimization Controller for Reactive Power Flow Optimization in a Distribution Grid. In Proceedings of IEEE PES ISGT Europe, 2023.
[16] Personalized optimization with user's feedback. Automatica, 131:109767, 2021.
[17] Model-free real-time optimization of process systems using safe Bayesian optimization. AIChE Journal, 69(4), 2023.
[18] Violation-aware contextual Bayesian optimization for controller performance optimization with unmodeled constraints. Journal of Process Control, 138:103212, 2024.
[19] A robust event-triggered approach for fast sampled-data extremization and learning. IEEE Transactions on Automatic Control, 62(10):4949–4964, 2017.
[20] Model-free primal-dual methods for network optimization with application to real-time optimal power flow. In Proceedings of the American Control Conference, pages 3140–3147, 2020.
[21] Zeroth-order feedback optimization for cooperative multi-agent systems. Automatica, 148, 2023.
[22] Model-Free Nonlinear Feedback Optimization. IEEE Transactions on Automatic Control, 69(7):4554–4569, 2024.
[23] Continuous-time zeroth-order dynamics with projection maps: Model-free feedback optimization with safety guarantees. IEEE Transactions on Automatic Control, 2025.
[24] Adaptive real-time grid operation via Online Feedback Optimization with sensitivity estimation. Electric Power Systems Research, 212, 2022.
[25] An Online Feedback Optimization Approach to Voltage Regulation in Inverter-Based Power Distribution Networks. In Proceedings of the American Control Conference, pages 1868–1873, 2023.
[26] Model-Free Game-Theoretic Feedback Optimization. In Proceedings of the European Control Conference, pages 1–8, 2023.
[27] Online Stochastic Optimization for Unknown Linear Systems: Data-Driven Controller Synthesis and Analysis. IEEE Transactions on Automatic Control, 69(7):4411–4426, 2024.
[28] Robust Regression and Lasso. IEEE Transactions on Information Theory, 56(7):3561–3574, 2010.
[29] Theory and applications of robust optimization. SIAM Review, 53(3):464–501, 2011.
[30] Robust data-enabled predictive control: Tractable formulations and performance guarantees. IEEE Transactions on Automatic Control, 68(5):3163–3170, 2023.
[31] Selected topics in robust convex optimization. Mathematical Programming, 112:125–158, 2008.
[32] Robust Solutions to Least-Squares Problems with Uncertain Data. SIAM Journal on Matrix Analysis and Applications, 18(4):1035–1064, 1997.
[33] The Stability and Control of Discrete Processes, volume 62. Springer, USA, 1986.
[34] MATPOWER. IEEE Transactions on Power Systems, 26(1):12–19, 2011.
[35] UNICORN.
[36] Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, USA, 2011.