Computer, Electrical and Mathematical Science and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
Taous Meriem Laleg-Kirati is affiliated with the National Institute for Research in Digital Science and Technology (INRIA), Paris-Saclay, France
Contraction analysis offers, through elegant mathematical developments, a unified way of designing observers for a general class of nonlinear systems, where the observer correction term is obtained by solving an infinite-dimensional inequality that guarantees global exponential convergence. However, the matrix partial differential inequality involved in this design is challenging to solve both analytically and numerically, which has long prevented the wide use of contraction-based observers. The present paper therefore proposes a novel approach that relies on an unsupervised Physics-Informed Neural Network (PINN) to design the observer’s correction term by enforcing the partial differential inequality in the loss function. The performance of the proposed PINN-based nonlinear observer is assessed in numerical simulations, along with its robustness to measurement noise and neural network approximation error.
Nonlinear observer design is a fundamental research area in control theory that is constantly attracting attention from researchers in the community. While general and systematic frameworks for state estimation of linear systems with global convergence guarantees are well-established in the literature [1, 2], nonlinear observer design still suffers from a lack of generality and global convergence guarantees.
The literature abounds with various nonlinear observer methods such as high-gain observers [3], immersion and invariance-based observers [4], observers based on geometric methods [5], observers based on Linear Matrix Inequalities (LMIs) [6], algebraic estimators [7], approaches relying on an injective transformation into a larger latent space [8], and the well-known Extended Kalman Filter [9]. However, most of the above-mentioned designs rely heavily on the class of nonlinearity of the system or provide only local convergence guarantees. On the other hand, observer design based on contraction analysis has been regaining popularity since its introduction in the late nineties [10]. It offers a generic design framework that is suitable for a general class of smooth nonlinear systems while providing global exponential convergence guarantees.
Contraction analysis was introduced in [11], drawing on concepts from continuum mechanics and differential geometry. It offers a distinct perspective on stability analysis compared to traditional methods like Lyapunov theory [12, 13, 14]. Rather than focusing on the convergence of a system to a target trajectory or equilibrium point, contraction analysis assesses stability by determining whether all trajectories of a system converge toward one another. In essence, a system is considered contracting if it eventually "forgets" its initial conditions or any temporary disturbances, making this approach particularly well-suited for observer design. Indeed, the initial motivation behind contraction analysis was primarily related to the design of observers for nonlinear systems [15]. Numerous contraction-based nonlinear controllers and observers have since been proposed in the literature such as the work in [10, 16, 17, 11, 18, 19, 15]. Contraction analysis provides a systematic framework for designing nonlinear observers, leveraging sophisticated mathematical developments that ensure global exponential convergence. The observer’s correction term is determined by solving a Matrix Partial Differential Inequality (MPDI). However, despite the theoretical elegance of this approach, solving the MPDI presents significant analytical and numerical challenges that limit the use and implementation of contraction-based observers in their original form [20]. Therefore, we aim in the present paper to overcome this challenge by proposing a learning-based approach to design the contraction-based nonlinear observer’s gain satisfying the MPDI.
Artificial Neural Networks (ANNs) have emerged as powerful tools in approximating solutions to ordinary and partial differential equations (ODEs and PDEs). By leveraging their ability to approximate complex nonlinear functions, neural networks can be trained to satisfy the differential equations governing physical phenomena directly. This approach, often referred to as Physics-Informed Neural Networks (PINNs), was pioneered by Raissi et al. [21], and incorporates the underlying physical laws into the learning process, allowing the network to learn solutions that are consistent with the known physics. The advent of automatic differentiation has further revolutionized this field by enabling the efficient computation of derivatives required in the training process. This capability facilitates the direct incorporation of differential equations into the neural network’s loss function, ensuring that the learned solutions not only fit the data but also satisfy the physical laws described by the differential equations.
Physics-Informed Neural Networks (PINNs) have been extensively applied to the modeling and parameter estimation of nonlinear dynamical systems. For instance, in [22], a Runge–Kutta-based PINN framework was proposed for parameter estimation and modeling of nonlinear systems, where the physics loss refines the integration step of the Runge–Kutta scheme and reduces the error of the learned trajectories. More closely related to the present work, PINNs have been utilized in observer design for discrete-time and continuous-time nonlinear systems. The work in [23] introduced a PINN-based approach for designing nonlinear discrete-time observers, showing improved performance in state estimation without the need for explicit system models. Moreover, learning-based methods have been employed to design KKL observers for autonomous nonlinear systems, as presented in [24, 25], and for nonlinear systems subject to measurement delay [26, 27], where neural networks were trained to approximate the forward and inverse maps involved in the KKL observer design. The work in [28, 29] showed that PINNs are effective in model predictive control, addressing practical challenges in the oil and gas industry, and can accurately identify nonlinear dynamical systems. These findings emphasize the versatility of PINNs in system identification and observer design: by embedding physical laws directly within neural network architectures, they yield models that are both data-efficient and physically consistent, generalize robustly even for time horizons far beyond the training or operating points, and maintain resilience against additive noise.
In the present paper, we propose an unsupervised learning approach to design a nonlinear observer for a general class of nonlinear systems based on contraction analysis. The proposed approach relies on a Physics-Informed Neural Network to enforce the contraction condition in the learning process. Based on the MPDI involved in the gain design of the contraction-based nonlinear observer, we formulate an optimization problem by strategically designing the cost function of the Physics-Informed Neural Network. Furthermore, we establish the robustness of the proposed learning-based nonlinear observer to the neural network approximation error and measurement noise and derive conditions ensuring exponential input-to-state stability.
The present paper is organized as follows: Section 2 gives some background on contraction analysis and contraction-based nonlinear observers. The problem addressed in this paper is formulated in section 3. Subsequently, we present the proposed unsupervised learning approach to design the observer’s gain in section 4, and analyze the robustness of the designed observer to the neural network approximation error and measurement noise in section 5. The performance and robustness of the proposed observer are evaluated through two numerical examples in section 6. Finally, a summary of the contributions and future work directions is provided in section 7.
Notation.
For a square matrix \( M\) , \( \operatorname{He}\{M\}=\frac{1}{2}(M+M^T)\) denotes the Hermitian part of \( M\) . The class \( \mathcal{C}^1\) is the class of continuous functions with continuous first derivatives. The Euclidean norm of a vector \( u\) is denoted by \( \left\|{u}\right\|\) .
Inspired by fluid mechanics and differential geometry, Contraction Analysis offers an alternative way of studying stability. Usually, stability is studied with respect to a nominal trajectory or an equilibrium point. Instead, contraction analysis studies the convergence of the solutions of a dynamical system to each other. In other words, the system is considered stable if the system’s final behavior is independent of the initial condition [11]. A central result that was derived in [10, 16] is that if all neighboring trajectories converge to each other, then the system is globally exponentially stable and all trajectories converge to a single one.
We consider the following general autonomous nonlinear system
\[ \dot{x}(t) = f(x(t)), \qquad y(t) = h(x(t)) \tag{1} \]
where \( x(t) \in \mathbb{R}^n\) is the state, \( y(t) \in \mathbb{R}^p\) is the output, \( f: \mathbb{R}^n \rightarrow \mathbb{R}^n\) and \( h: \mathbb{R}^n \rightarrow \mathbb{R}^p\) are smooth vector fields. The main result of [11] is that if there exists a bounded symmetric and positive definite matrix \( P \in \mathbb{R}^{n \times n}\) , such that
\[ \operatorname{He}\left\{ P\,\frac{\partial f}{\partial x}(x) \right\} \preceq -\lambda P, \qquad \forall x \in \mathbb{R}^n \tag{2} \]
then system (1) is contracting with rate \( \lambda\) . Furthermore, this result can be viewed as a generalization and strengthening of Krasovskii’s global convergence theorem [30].
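Although condition (2) is infinite-dimensional, it can at least be spot-checked numerically on sampled states. The following is a minimal sketch, assuming PyTorch; the vector field, metric, rate, and sampling box at the end are illustrative placeholders, not taken from the paper.

```python
import torch

def hermitian_part(M):
    # He{M} = (M + M^T)/2, as defined in the Notation paragraph
    return 0.5 * (M + M.T)

def max_contraction_violation(f, P, lam, samples):
    """Largest eigenvalue of He{P df/dx(x)} + lam*P over sampled states;
    a value <= 0 indicates that condition (2) holds at every sample."""
    worst = -float("inf")
    for x in samples:
        J = torch.autograd.functional.jacobian(f, x)   # df/dx at x
        M = hermitian_part(P @ J) + lam * P
        worst = max(worst, torch.linalg.eigvalsh(M).max().item())
    return worst

# Illustrative linear field dx/dt = A x: contracting with rate 0.5 for P = I
A = torch.tensor([[-1.0, 0.5], [-0.5, -1.0]])
samples = torch.rand(100, 2) * 4 - 2                   # uniform on [-2, 2]^2
print(max_contraction_violation(lambda x: A @ x, torch.eye(2), 0.5, samples))
```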
One of the main features of contraction analysis is the ability to conclude on a system’s stability without knowledge of any predefined nominal trajectory, which is a particularly attractive feature for observer design. Contraction analysis therefore offers a universal way of designing observers for nonlinear systems of the form (1), provided that the system is differentially detectable, which is a necessary condition for the existence of an exponentially stable observer of the form [31]:
\[ \dot{\hat{x}} = f(\hat{x}) + k(\hat{x}, y), \qquad \hat{y} = h(\hat{x}) \tag{3} \]
where \( \hat{x} \in \mathbb{R}^n\) is the estimated state, \( \hat{y} \in \mathbb{R}^p\) is the estimated output, and \( k: \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}^n \) is the observer’s nonlinear correction term. The correction term \( k\) and the nonlinear function \( f\) are assumed to be of at least class \( \mathcal{C}^1\) .
The following theorem provides an approach to designing the correction term \( k\) based on the contraction analysis.
Theorem 1
[31] Consider the smooth nonlinear system (1). If there exist a positive definite matrix \( P\in \mathbb{R}^{n\times n}\) , a \( \mathcal{C}^1\) function \( k: \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}^n \) satisfying \( k(\hat{x}, h(\hat{x}))=0\) for all \( \hat{x}\in\mathbb{R}^n\) , and a real positive number \( \lambda\) , such that
\[ \operatorname{He}\left\{ P\left( \frac{\partial f}{\partial \hat{x}}(\hat{x}) + \frac{\partial k}{\partial \hat{x}}(\hat{x}, y) \right) \right\} \preceq -\lambda P, \qquad \forall (\hat{x}, y) \in \mathbb{R}^n \times \mathbb{R}^p \tag{4} \]
then the observer in (3) is a globally exponentially stable observer for system (1).
While Theorem 1 provides a general approach for designing nonlinear observers, determining the correction term involves solving a matrix partial differential inequality, which is highly challenging both analytically and numerically; to the best of our knowledge, no existing numerical solvers are specifically equipped to handle this class of problems. To overcome this limitation, we exploit the universal approximation property of neural networks to numerically approximate the observer’s correction term. We specifically rely on an unsupervised Physics-Informed Neural Network approach that enforces the contraction conditions of Theorem 1 to learn the observer’s gain.
Moreover, although the observer in (3) satisfying (4) is global, using a neural network requires training on a compact set of interest. Therefore, we consider, in the remainder of the paper, systems of the form (1) that satisfy the following assumption.
Assumption 1
System (1) is forward invariant within a compact set \( \mathcal{X} \subset \mathbb{R}^n\) , i.e., for all initial conditions \( x(0)=x_0\in \mathcal{X}_0 \subset \mathcal{X}\) and all \( t >0\) , \( X\left(t,x_0\right) \in \mathcal{X}\) and \( Y\left(t,x_0\right) \in \mathcal{Y}\) , where \( X\left(t,x_0\right)\) is the solution of (1) at time \( t\) initialized at \( x(0)=x_0\) , and \( Y\left(t,x_0\right)=h\left(X\left(t,x_0\right)\right)\) is the corresponding output.
Remark 1
To reduce the complexity of the MPDI in (4), we fix the matrix \( P\) and solve (4) for \( k\) , instead of solving for both \( P\) and \( k\) . Therefore, we consider in the rest of the paper \( P=I\) , where \( I\) is the \( n \times n\) identity matrix.
In this section, we present the proposed Physics-Informed Neural Network approach to learn the gain of the observer in (3). Since the goal is to learn the correction term of the observer, we do not have pre-computed trajectories for the estimated state \( \hat{x}\) . Therefore, we propose an unsupervised PINN-based approach to learn the observer’s gain by leveraging contraction analysis and enforcing the conditions of Theorem 1 to guarantee the exponential stability of the learned observer.
Let \( \hat{k}_\theta: \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}^n \) be the learned observer gain given by
\[ \hat{k}_\theta(\hat{x}, y) = T_\theta(\hat{x}, y) \tag{5} \]
where \( T_{\theta}\) is the function parameterized by the neural network and \( \theta\) collects its weights and biases. The block diagram of the proposed learning-based nonlinear observer is depicted in Fig. 1. By integrating the contraction condition into the neural network training, we effectively regularize the model using the physics knowledge and constraints of the observer, which allows the neural network to learn the observer’s correction term solely from the contraction condition and thereby removes the need for large datasets. To this end, we formulate the contraction condition of Theorem 1 as an optimization problem and derive a loss function that enforces it.
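The paper does not spell out the parameterization of \( T_\theta\) at this point; a minimal PyTorch sketch consistent with the architecture reported in the numerical section (a multi-layer perceptron with 5 hidden layers of 30 neurons) could look as follows. The tanh activation is an assumption, chosen because it is smooth and globally Lipschitz, in line with the Lipschitz requirement introduced later (Assumption 2).

```python
import torch
import torch.nn as nn

class GainNetwork(nn.Module):
    """T_theta: (x_hat, y) -> k_hat in R^n.
    Width/depth follow the numerical section; tanh is an assumption."""
    def __init__(self, n, p, width=30, depth=5):
        super().__init__()
        layers, d = [], n + p
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(d, n))
        self.net = nn.Sequential(*layers)

    def forward(self, x_hat, y):
        # Concatenate the estimate and the measurement along the last axis
        return self.net(torch.cat([x_hat, y], dim=-1))

gain_net = GainNetwork(n=2, p=1)   # dimensions of the examples in Section 6
```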
The dataset used for training as well as the loss function used in the training process are detailed in the following subsections.
Since we do not rely on data points for the training, we generate collocation points and construct a physics dataset \( D=\{\hat{x}^{(j)}, y^{(j)}\}_{j=1}^{N}\) by uniformly sampling \( (\mathcal{X},\mathcal{Y})\) , where \( j=1,\dots,N\) indexes the samples and \( N\) is the total number of collocation points.
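A minimal sketch of this sampling step, assuming PyTorch and a scalar output (\( p=1\) ) as in the examples of the numerical section; the box bounds shown are those reported there for the Van der Pol oscillator.

```python
import torch

def sample_collocation(N, x_low, x_high, y_low, y_high):
    """Uniformly sample N collocation pairs (x_hat, y) over X x Y.
    No trajectories are simulated: the pairs only support the physics loss."""
    x_low = torch.as_tensor(x_low)
    x_high = torch.as_tensor(x_high)
    x_hat = x_low + (x_high - x_low) * torch.rand(N, x_low.numel())
    y = y_low + (y_high - y_low) * torch.rand(N, 1)
    return x_hat, y

# Van der Pol ranges from the numerical section: X = [-2,2] x [-3,3], Y = [-2,2]
x_hat, y = sample_collocation(4000, [-2.0, -3.0], [2.0, 3.0], -2.0, 2.0)
```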
Instead of minimizing a loss function associated with data trajectories plus a physics term, we focus exclusively on a physics-based loss function. The loss function is formulated to penalize deviations from the contraction condition across the domain of interest, and is a combination of the contraction condition and the boundary condition loss functions:
(6)
where \( \mathcal{L}_{\text{MPDI}} \) and \( \mathcal{L}_{\text{BC}}\) represent respectively the contraction condition loss function and the boundary condition loss function, and \( \mu_1\) and \( \mu_2\) are weighting coefficients.
The primary loss function is designed to enforce the contraction condition, ensuring that the system’s differential dynamics satisfies the required inequality (4), which, for \( P=I\) (Remark 1) and \( k=\hat{k}_\theta\) , becomes
\[ D(\hat{x},y) := \operatorname{He}\left\{ \frac{\partial f}{\partial \hat{x}}(\hat{x}) + \frac{\partial \hat{k}_\theta}{\partial \hat{x}}(\hat{x}, y) \right\} + \lambda I \preceq 0 \tag{7} \]
The Matrix Partial Differential Inequality (MPDI) in (7) is expressed as a negative semi-definiteness condition. Since \( D(\hat{x},y)\) is symmetric, we exploit the properties of its leading principal minors to impose the negative semi-definiteness condition in (7), recalling the following Sylvester criterion.
Theorem 2
[32] A symmetric matrix \( D\in\mathbb{R}^{n\times n}\) is negative semi-definite if and only if \( (-1)^i \Delta_i \geq 0 \) for all leading principal minors \( \Delta_i \) , \( i = 1, 2, \dots, n \) .
Therefore, \( \mathcal{L}_{\text{MPDI}}\) is constructed by penalizing the leading principal minors of (7) as follows
\[ \mathcal{L}_{\text{MPDI}} = \sum_{i=1}^{n} \rho_i\, l_i \tag{8} \]
where \( \rho_i\) are weighting coefficients for the principal-minor loss functions \( l_i\) , given by
\[ l_i = \frac{1}{N}\sum_{j=1}^{N} \max\left(0,\, (-1)^{i+1}\,\Delta_i\left(\hat{x}^{(j)},y^{(j)}\right)\right), \]
with \( \Delta_i(\hat{x}^{(j)},y^{(j)})\) denoting the \( i\) -th leading principal minor of \( D(\hat{x}^{(j)},y^{(j)})\) , for \( i=1,\dots,n\) .
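One plausible PyTorch realization of this penalty, assuming the hinge form of \( l_i\) above; the batched matrices \( D(\hat{x}^{(j)},y^{(j)})\) are assumed to be assembled from the Jacobians as sketched further below.

```python
import torch

def mpdi_loss(D_batch, rho):
    """Hinge penalties on the Sylvester conditions (-1)^i * Delta_i >= 0
    for a batch of symmetric matrices D(x_hat, y) from (7).
    D_batch: (N, n, n) tensor; rho: list of n minor weights."""
    n = D_batch.shape[-1]
    loss = D_batch.new_zeros(())
    for i in range(1, n + 1):
        minors = torch.linalg.det(D_batch[:, :i, :i])          # Delta_i, batched
        violation = torch.relu(((-1.0) ** (i + 1)) * minors)   # > 0 iff violated
        loss = loss + rho[i - 1] * violation.mean()
    return loss
```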
The loss function for the boundary condition is constructed by noting that if the second argument of the gain \( \hat{k}_\theta\) equals the output map evaluated at its first argument, then the observer’s trajectory coincides with that of the system and the observer’s gain must vanish identically. One way to enforce this condition is the following loss function:
\[ \mathcal{L}_{\text{BC}} = \frac{1}{N}\sum_{j=1}^{N} \left\| \hat{k}_\theta\left(\hat{x}^{(j)}, h(\hat{x}^{(j)})\right) \right\|^2 \tag{9} \]
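A corresponding sketch of (9), reusing the GainNetwork sketched earlier and assuming the output map \( h\) is available as a batched callable:

```python
def bc_loss(gain_net, h, x_hat):
    """L_BC in (9): the learned gain must vanish when its second argument
    equals the output predicted from its first, k_hat(x_hat, h(x_hat)) = 0."""
    k_on_manifold = gain_net(x_hat, h(x_hat))
    return (k_on_manifold ** 2).sum(dim=-1).mean()
```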
Training the PINN requires computing the loss function in (6), which in turn requires the Jacobians of \( f\) and \( \hat{k}_\theta\) . Since the description of \( f\) is given by the model, the Jacobian \( J_x = \frac{\partial f}{\partial \hat{x}}\) is computed analytically to avoid numerical differentiation errors. On the other hand, the Jacobian of the gain, \( \frac{\partial \hat{k}_\theta}{\partial \hat{x}}\) , is computed by exploiting the automatic differentiation frameworks underlying neural network optimization. The training algorithm is available upon request.
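A sketch of this split, assuming PyTorch ≥ 2.0 (for torch.func) and, purely for illustration, a unit-damping Van der Pol field: the analytical Jacobian \( J_x\) is hard-coded, while \( \partial\hat{k}_\theta/\partial\hat{x}\) comes from reverse-mode automatic differentiation.

```python
import torch
from torch.func import jacrev, vmap

def jac_f(x):
    """Analytical Jacobian of the (assumed, unit-damping) Van der Pol field
    f(x) = (x2, (1 - x1^2) x2 - x1); x is a single state of shape (2,)."""
    x1, x2 = x[0], x[1]
    return torch.stack([
        torch.stack([torch.zeros_like(x1), torch.ones_like(x1)]),
        torch.stack([-1.0 - 2.0 * x1 * x2, 1.0 - x1 ** 2]),
    ])

def contraction_matrix(gain_net, x_hat, y, lam):
    """Assemble the batched D(x_hat, y) of (7), with P = I (Remark 1)."""
    Jf = vmap(jac_f)(x_hat)                            # analytical, batched
    Jk = vmap(jacrev(gain_net, argnums=0))(x_hat, y)   # autodiff, batched
    S = Jf + Jk
    return 0.5 * (S + S.transpose(-1, -2)) + lam * torch.eye(S.shape[-1])
```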
In this section, we study the effect of the neural network approximation error and the measurement noise on the convergence of the observer by considering the following structure
\[ \dot{\hat{x}} = f(\hat{x}) + \hat{k}_\theta(\hat{x}, y_e) \tag{10} \]
with \( y_e\) and \( \hat{k}_{\theta}\) denoting, respectively, the noisy output and the learned observer gain, given by
\[ y_e = y + v \tag{11} \]
\[ \hat{k}_\theta(\hat{x}, y) = k(\hat{x}, y) + \varepsilon(\hat{x}, y) \tag{12} \]
where \( v\) is the measurement noise and \( \varepsilon\) is the neural network approximation error.
To study the robustness of the proposed PINN-based contraction nonlinear observer, we consider the following assumptions:
Assumption 2
The learned observer gain \( \hat{k}_\theta\) is Lipschitz in its second argument, uniformly in \( \hat{x}\) :
\[ \left\| \hat{k}_\theta(\hat{x}, y_1) - \hat{k}_\theta(\hat{x}, y_2) \right\| \leq L \left\| y_1 - y_2 \right\| , \qquad \forall \hat{x},\, y_1,\, y_2 \tag{13} \]
with \( L>0\) .
Assumption 3
There exist two positive and bounded constants \( \bar{v}\) and \( \bar{\varepsilon}\) such that the measurement noise \( v(t)\) and the neural network approximation error \( \varepsilon\) satisfy
\[ \left\| v(t) \right\| \leq \bar{v} \quad \forall t\geq 0, \qquad \left\| \varepsilon(\hat{x},y) \right\| \leq \bar{\varepsilon} \quad \forall (\hat{x},y)\in\mathcal{X}\times\mathcal{Y}. \]
Assumption 2 can be easily satisfied by selecting a Lipschitz activation function for the neural network \( T_\theta\) . Moreover, since the states and the output of the system are bounded as per Assumption 1, and the approximation error \( \varepsilon\) is continuous over this compact set, \( \varepsilon\) is bounded.
Theorem 3
Consider system (1) with the noisy output in (11) and the observer in (10), and let Assumptions 2 and 3 hold. Let \( \lambda\) be the contraction rate in (4) for \( P=I\) . If \( \lambda>2\) , then, with the strictly positive constant \( \eta=\lambda-2\) , the estimation error is exponentially input-to-state stable and satisfies:
\[ \left\| x(t)-\hat{x}(t) \right\| \leq \left\| x(0)-\hat{x}(0) \right\| e^{-\eta t} + \frac{1}{2}\sqrt{\frac{L^2\bar{v}^2+\bar{\varepsilon}^2}{\eta}} \tag{14} \]
Proof
Consider the following Lyapunov function \( V=\frac{1}{2}\left\|{x-\hat{x}}\right\|^2\) . Differentiating along (1) and (10), and decomposing \( \hat{k}_\theta(\hat{x},y_e)\) using (12), one obtains
\[ \dot{V} = (x-\hat{x})^T\left(f(x)-f(\hat{x})-k(\hat{x},y)\right) - (x-\hat{x})^T\left(\hat{k}_\theta(\hat{x},y_e)-\hat{k}_\theta(\hat{x},y)\right) - (x-\hat{x})^T\varepsilon(\hat{x},y) \tag{15} \]
Using Young’s inequality in the form \( ab \leq a^2 + \frac{b^2}{4}\) on the last two terms, (15) becomes
\[ \dot{V} \leq (x-\hat{x})^T\left(f(x)-f(\hat{x})-k(\hat{x},y)\right) + 2\left\|x-\hat{x}\right\|^2 + \frac{1}{4}\left\|\hat{k}_\theta(\hat{x},y_e)-\hat{k}_\theta(\hat{x},y)\right\|^2 + \frac{1}{4}\left\|\varepsilon\right\|^2 \tag{16} \]
Substituting (13) and the bounds of Assumption 3 into (16), one obtains
\[ \dot{V} \leq (x-\hat{x})^T\left(f(x)-f(\hat{x})-k(\hat{x},y)\right) + 2\left\|x-\hat{x}\right\|^2 + \frac{1}{4}\left(L^2\bar{v}^2+\bar{\varepsilon}^2\right) \tag{17} \]
Using the identity in [31, see Theorem 4.3 p. 231] for \( y=h(x)\) , which yields \( (x-\hat{x})^T\left(f(x)-f(\hat{x})-k(\hat{x},y)\right) \leq -\lambda\left\|x-\hat{x}\right\|^2\) , one obtains
\[ \dot{V} \leq -\left(\lambda-2\right)\left\|x-\hat{x}\right\|^2 + \frac{1}{4}\left(L^2\bar{v}^2+\bar{\varepsilon}^2\right) = -2(\lambda-2)V + \frac{1}{4}\left(L^2\bar{v}^2+\bar{\varepsilon}^2\right) \tag{18} \]
Finally, by the comparison lemma, exponential input-to-state stability is ensured if \( \lambda > 2\) , and the estimation error satisfies (14) with \( \eta=\lambda -2\) .
Theorem 3 indicates that increasing the contraction rate \( \lambda\) can further mitigate the impact of the neural network approximation errors and measurement noise on the estimation. However, a higher contraction rate reduces the likelihood of finding a gain \( k\) that satisfies the MPDI.
To evaluate the performance and the noise rejection of the proposed learning-based contraction nonlinear observer, we perform numerical simulations on two nonlinear systems: the Van der Pol and reverse Duffing oscillators, given by (19) and (20), respectively.
\[ \dot{x}_1 = x_2, \qquad \dot{x}_2 = \left(1-x_1^2\right)x_2 - x_1, \qquad y = x_1 \tag{19} \]
\[ \dot{x}_1 = x_2^3, \qquad \dot{x}_2 = -x_1, \qquad y = x_1 \tag{20} \]
A training dataset of 4000 samples was generated by uniformly sampling the region of interest \( (\mathcal{X}, \mathcal{Y})\) , where \( (\mathcal{X}, \mathcal{Y})=([-2,2] \times[-3,3], [-2, 2])\) for the Van der Pol oscillator and \( (\mathcal{X}, \mathcal{Y})=([-1,1]^2, [-1, 1])\) for the reverse Duffing oscillator. The architecture considered for both systems is a multi-layer perceptron with 5 hidden layers of 30 neurons each. The neural network was trained using the Adam optimizer [33] with a learning rate of \( \alpha= 10^{-3}\) for 500 iterations, followed by the same number of iterations of the L-BFGS optimization algorithm [34] with \( \beta=1\) , which aligns with methodologies reported in the literature to perform well for training physics-based deep learning models [21, 35, 36, 37, 38, 28, 29]. The training was performed on a single NVIDIA A40 GPU.
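A minimal sketch of this two-stage schedule in PyTorch; `total_loss` is assumed to be a zero-argument closure that evaluates (6) on the collocation set using the pieces sketched above.

```python
import torch

def train(gain_net, total_loss, adam_iters=500, lbfgs_iters=500, lr=1e-3):
    """Adam warm-up followed by L-BFGS refinement, as reported in the text."""
    adam = torch.optim.Adam(gain_net.parameters(), lr=lr)
    for _ in range(adam_iters):
        adam.zero_grad()
        loss = total_loss()
        loss.backward()
        adam.step()

    lbfgs = torch.optim.LBFGS(gain_net.parameters(), max_iter=lbfgs_iters)
    def closure():
        lbfgs.zero_grad()
        loss = total_loss()
        loss.backward()
        return loss
    lbfgs.step(closure)   # runs up to max_iter internal iterations
    return gain_net
```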
A contraction rate \( \lambda\) of 2.5 is considered for both systems, and the loss function weights for both systems are provided in Table 1.
| Weights | \( \mu_1\) | \( \mu_2\) | \( \rho_1\) | \( \rho_2\) |
| --- | --- | --- | --- | --- |
| Van der Pol | \( 10^{-3}\) | 1 | 1 | \( 10^{-1}\) |
| Reverse Duffing | 1 | 1 | 1 | 1 |
The simulation results under an additive white Gaussian measurement noise of magnitude \( 0.15\) are depicted in Figs. 2 and 3. The Van der Pol oscillator was initialized at \( [-1, 2.5]\) , the reverse Duffing oscillator at \( [-0.5, 0.5]\) , and the observer for both systems at the origin. One can see that the proposed learning-based contraction nonlinear observer was able to accurately estimate the states of both systems (19) and (20) and demonstrated good noise rejection. Furthermore, the estimation error stays bounded within a neighborhood of the origin determined by the approximation error of the neural network and the noise level. To further assess the robustness of the proposed observer to measurement noise, the PINN was trained for several values of the contraction rate \( \lambda\) , and the observer was simulated under the same level of measurement noise for \( \hat{x}(0)=x(0)\) . The results in Table 2 confirm the findings of Theorem 3 and show that increasing the contraction rate reduces the effect of the measurement noise on the estimation error.
| \( \lambda\) | \( 2.5\) | \( 4\) | \( 5\) |
| --- | --- | --- | --- |
| Van der Pol | \( 0.53\) | \( 0.47\) | \( 0.22\) |
| Reverse Duffing | \( 0.062\) | \( 0.016\) | \( 0.013\) |
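The following sketch reproduces the kind of closed-loop simulation reported above for the reverse Duffing system, assuming the form written in (20) and the GainNetwork sketched earlier; drawing a fresh noise sample inside the right-hand side is a simplification (a fixed-step integrator, or holding the noise per step, would be more faithful).

```python
import numpy as np
import torch
from scipy.integrate import solve_ivp

f = lambda x: np.array([x[1] ** 3, -x[0]])   # reverse Duffing, as in (20)
h = lambda x: np.array([x[0]])               # measured output y = x1

def coupled(t, z, gain_net, noise_std=0.15):
    """Stacked plant and observer dynamics: z = (x, x_hat)."""
    x, x_hat = z[:2], z[2:]
    y_e = h(x) + noise_std * np.random.randn(1)          # noisy measurement
    with torch.no_grad():
        k = gain_net(torch.as_tensor(x_hat, dtype=torch.float32),
                     torch.as_tensor(y_e, dtype=torch.float32)).numpy()
    return np.concatenate([f(x), f(x_hat) + k])          # plant, observer (10)

z0 = np.array([-0.5, 0.5, 0.0, 0.0])   # x(0) as above, observer at the origin
sol = solve_ivp(coupled, (0.0, 20.0), z0, args=(gain_net,), max_step=1e-2)
err = np.linalg.norm(sol.y[:2] - sol.y[2:], axis=0)      # estimation error
```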
The present paper addressed a long-standing drawback of contraction-based nonlinear observer design by proposing an unsupervised learning-based approach to design the nonlinear observer’s gain. The proposed approach relies on a physics-informed neural network that enforces the contraction condition to approximate the gain of an exponentially stable observer for an autonomous nonlinear system. The effect of the neural network approximation error and measurement noise was studied, and conditions ensuring exponential input-to-state stability were derived. The proposed learning-based contraction nonlinear observer demonstrated good performance and noise rejection in numerical simulations. Future work will focus on extending the proposed approach to non-autonomous nonlinear systems.
Research reported in this publication was supported by King Abdullah University of Science and Technology (KAUST) with the Base Research Fund (BAS/1/1627-01-01) and (BAS/1/1665-01-01), and the National Institute for Research in Digital Science and Technology (INRIA). The authors would also like to thank Ibrahima N’doye for the fruitful discussions that led to this paper.
[1] Observing the State of a Linear System IEEE Transactions on Military Electronics 1964 8 2 74-80
[2] A New Approach to Linear Filtering and Prediction Problems Journal of Basic Engineering 1960 82 1 35-45
[3] High-gain observers in nonlinear feedback control 2008 International Conference on Control, Automation and Systems 2008 xlvii-lvii 10.1109/ICCAS.2008.4694705
[4] Invariant manifold based reduced-order observer design for nonlinear systems IEEE Transactions on Automatic Control 2008 53 11 2602–2614
[5] Linearization by output injection and nonlinear observers Systems & Control Letters 1983 3 1 47-52
[6] Observers for Lipschitz nonlinear systems IEEE Transactions on Automatic Control 1998 43 3 397-401 10.1109/9.661604
[7] Non-asymptotic neural network-based state and disturbance estimation for a class of nonlinear systems using modulating functions 2023 American Control Conference 2023
[8] Nonlinear observer design using Lyapunov's auxiliary theorem Proceedings of the 36th IEEE Conference on Decision and Control 1997 5 4802-4807
[9] Optimal Filtering Prentice-Hall 1979 Information and system sciences series
[10] On metric observers for nonlinear systems Proceedings of the 1996 IEEE International Conference on Control Applications 1996 320-326 10.1109/CCA.1996.558742
[11] On Contraction Analysis for Non-linear Systems Automatica 1998 34 6 683-696
[12] Nonlinear dynamical control systems Springer-Verlag 1990 Berlin, Heidelberg
[13] Nonlinear Control Systems Springer London 1995 Communications and Control Engineering
[14] Nonlinear Systems Prentice Hall 2002 Pearson Education
[15] Contracting nonlinear observers: Convex optimization and learning from data 2018 annual American Control Conference (ACC) 2018 1873–1880 IEEE
[16] Applications of metric observers for nonlinear systems Proceedings of the 1996 IEEE International Conference on Control Applications 1996 367-372 10.1109/CCA.1996.558805
[17] Simple observers for Hamiltonian systems Proceedings of the 1997 American Control Conference 1997 5 2748-2753 10.1109/ACC.1997.611955
[18] Convergence of Nonlinear Observers on \( \mathbb{R}^{n}\) With a Riemannian Metric (Part I) IEEE Transactions on Automatic Control 2011 57 7 1709–1722
[19] Observer design for stochastic nonlinear systems via contraction-based incremental stability IEEE Transactions on Automatic Control 2014 60 3 700–714
[20] Peter Giesl, Sigurdur Hafstein, and Christoph Kawan. Review on contraction analysis and computation of contraction metrics. Journal of Computational Dynamics, 10(1):1–47, 2023.
[21] Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations Journal of Computational physics 2019 378 686–707
[22] Parameter estimation and modeling of nonlinear dynamical systems based on Runge–Kutta physics-informed neural network Nonlinear Dynamics 2023 111 22 21117–21130
[23] Nonlinear discrete-time observers with physics-informed neural networks Chaos, Solitons & Fractals 2024 186 115215
[24] Learning-based design of Luenberger observers for autonomous nonlinear systems 2023 American Control Conference (ACC) 2023 3048–3055 IEEE
[25] Deep learning-based luenberger observer design for discrete-time nonlinear systems 2021 60th IEEE Conference on Decision and Control (CDC) 2021 4370–4375 IEEE
[26] Yasmine Marani, Ibrahima N'Doye, and Taous Meriem Laleg-Kirati. Deep-learning based design of cascade observers for discrete-time nonlinear systems with output delay. IFAC-PapersOnLine, 56(2):9869–9874, 2023. 22nd IFAC World Congress.
[27] Yasmine Marani, Ibrahima N'Doye, and Taous Meriem Laleg-Kirati. Deep-learning based kkl chain observer for discrete-time nonlinear systems with time-varying output delay. Automatica, 171:111955, 2025.
[28] Physics-informed neural nets for control of dynamical systems Neurocomputing 2024 579 127419
[29] Physics-Informed Neural Networks with skip connections for modeling and control of gas-lifted oil wells Applied Soft Computing 2024 158 111603
[30] Stability of Motion Springer 1967
[31] Pauline Bernard, Vincent Andrieu, and Daniele Astolfi. Observer design for continuous-time dynamical systems. Annual Reviews in Control, 53:224–248, 2022.
[32] Positive definite matrices and Sylvester's criterion The American Mathematical Monthly 1991 98 1 44–46
[33] Adam: A method for stochastic optimization arXiv preprint arXiv:1412.6980 2014
[34] A limited memory algorithm for bound constrained optimization SIAM Journal on scientific computing 1995 16 5 1190–1208
[35] Challenges in training PINNs: A loss landscape perspective arXiv preprint arXiv:2402.01868 2024
[36] Physics-informed neural networks for solving forward and inverse flow problems via the Boltzmann-BGK formulation Journal of Computational Physics 2021 447 110676
[37] A mixed pressure-velocity formulation to model flow in heterogeneous porous media with physics-informed neural networks Advances in Water Resources 2023 181 104564
[38] Improved training of physics-informed neural networks with model ensembles 2023 International Joint Conference on Neural Networks (IJCNN) 2023 1–8 IEEE