Control#
Control laws for stabilisation and tracking.
LQR Controller#
Discrete-time infinite-horizon LQR controller.
Solves the discrete algebraic Riccati equation (DARE) to compute an
optimal state-feedback gain K for a linear system x_{k+1} = A x_k + B u_k.
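For intuition, a minimal sketch of how such a gain can be computed with SciPy's
DARE solver follows; dlqr_sketch is a hypothetical name, and this mirrors, but
is not necessarily, the library's internal implementation.

>>> import numpy as np
>>> from scipy.linalg import solve_discrete_are
>>> def dlqr_sketch(A, B, Q, R):
...     # Hypothetical reference implementation: solve the DARE for P,
...     # then form the gain K = (R + B^T P B)^{-1} B^T P A.
...     P = solve_discrete_are(A, B, Q, R)
...     K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
...     return K, P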
- online_estimators.control.lqr.dlqr(A, B, Q, R)[source]#
Compute the discrete-time LQR gain.
Solves the DARE and returns the optimal gain K such that the control law
u = -K x minimises the infinite-horizon cost sum_k (x_k^T Q x_k + u_k^T R u_k).

- Parameters:
  - A (np.ndarray, shape (n, n)) – System dynamics matrix.
  - B (np.ndarray, shape (n, m)) – Input matrix.
  - Q (np.ndarray, shape (n, n)) – State cost matrix (positive semi-definite).
  - R (np.ndarray, shape (m, m)) – Input cost matrix (positive definite).
- Return type:
  tuple[np.ndarray, np.ndarray]
- Returns:
  - K (np.ndarray, shape (m, n)) – Optimal feedback gain.
  - P (np.ndarray, shape (n, n)) – Solution to the DARE.
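As a quick usage check, the gain returned by dlqr should place all eigenvalues
of the closed-loop matrix A - B K strictly inside the unit circle; the
discretised double-integrator system below is an illustrative assumption, not
part of the library.

>>> import numpy as np
>>> from online_estimators.control.lqr import dlqr
>>> A = np.array([[1.0, 0.1], [0.0, 1.0]])  # double integrator, dt = 0.1
>>> B = np.array([[0.0], [0.1]])
>>> K, P = dlqr(A, B, np.eye(2), np.eye(1))
>>> bool(np.all(np.abs(np.linalg.eigvals(A - B @ K)) < 1.0))
True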
- class online_estimators.control.lqr.LQRController(A, B, Q, R)[source]#
Bases: object

Infinite-horizon discrete LQR controller.

Implements u = -K x where K is the DARE-optimal gain.

- Parameters:
  - A (np.ndarray, shape (n, n)) – System dynamics matrix.
  - B (np.ndarray, shape (n, m)) – Input matrix.
  - Q (np.ndarray, shape (n, n)) – State cost matrix (positive semi-definite).
  - R (np.ndarray, shape (m, m)) – Input cost matrix (positive definite).
Examples
>>> import numpy as np
>>> from online_estimators.control import LQRController
>>> A = np.eye(2) + 0.01 * np.array([[0, 1], [-1, 0]])
>>> B = np.array([[0], [0.01]])
>>> Q = np.eye(2)
>>> R = np.eye(1)
>>> ctrl = LQRController(A, B, Q, R)
>>> x = np.array([0.5, 0.1])
>>> u = ctrl.control(x)
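Continuing the example, the closed loop x_{k+1} = (A - B K) x_k can be
simulated by feeding each control back into the dynamics; since the LQR gain
stabilises this system, the state norm should decay (a sketch reusing the
objects defined above).

>>> x = np.array([0.5, 0.1])
>>> for _ in range(1000):
...     u = ctrl.control(x)  # u = -K x
...     x = A @ x + B @ u    # closed-loop step
>>> bool(np.linalg.norm(x) < 0.5)
True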