Control#

Control laws for stabilisation and tracking.

LQR Controller#

Discrete-time infinite-horizon LQR controller.

Solves the discrete algebraic Riccati equation (DARE) to compute an optimal state-feedback gain K for a linear system x_{k+1} = A x_k + B u_k.

online_estimators.control.lqr.dlqr(A, B, Q, R)[source]#

Compute the discrete-time LQR gain.

Solves the DARE and returns the optimal gain K such that the control law u = -K x minimises the infinite-horizon cost sum_k (x_k^T Q x_k + u_k^T R u_k).

Parameters:
  • A (np.ndarray, shape (n, n)) – System dynamics matrix.

  • B (np.ndarray, shape (n, m)) – Input matrix.

  • Q (np.ndarray, shape (n, n)) – State cost matrix (positive semi-definite).

  • R (np.ndarray, shape (m, m)) – Input cost matrix (positive definite).

Return type:

tuple[ndarray, ndarray]

Returns:

  • K (np.ndarray, shape (m, n)) – Optimal feedback gain.

  • P (np.ndarray, shape (n, n)) – Solution to the DARE.
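
A minimal usage sketch with illustrative dynamics (a double integrator discretised at dt = 0.01, not taken from the source). The returned pair should satisfy the standard relation K = (R + B^T P B)^{-1} B^T P A, which can be checked directly from the DARE solution P:

>>> import numpy as np
>>> from online_estimators.control.lqr import dlqr
>>> A = np.array([[1.0, 0.01], [0.0, 1.0]])  # discretised double integrator
>>> B = np.array([[0.0], [0.01]])
>>> Q, R = np.eye(2), np.eye(1)
>>> K, P = dlqr(A, B, Q, R)
>>> K_check = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
>>> np.allclose(K, K_check)
True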

class online_estimators.control.lqr.LQRController(A, B, Q, R)[source]#

Bases: object

Infinite-horizon discrete LQR controller.

Implements u = -K x where K is the DARE-optimal gain.

Parameters:
  • A (np.ndarray, shape (n, n))

  • B (np.ndarray, shape (n, m))

  • Q (np.ndarray, shape (n, n))

  • R (np.ndarray, shape (m, m))

Examples

>>> import numpy as np
>>> from online_estimators.control import LQRController
>>> A = np.eye(2) + 0.01 * np.array([[0, 1], [-1, 0]])
>>> B = np.array([[0], [0.01]])
>>> Q = np.eye(2)
>>> R = np.eye(1)
>>> ctrl = LQRController(A, B, Q, R)
>>> x = np.array([0.5, 0.1])
>>> u = ctrl.control(x)
update_linearized_dynamics(A, B)[source]#

Recompute the gain for updated linearised dynamics.

Parameters:
  • A (np.ndarray, shape (n, n))

  • B (np.ndarray, shape (n, m))

Return type:

None
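
A hedged sketch of one way this method could be used in a relinearise-then-control loop, based on an illustrative Euler-discretised pendulum (the model and the linearise helper below are assumptions for illustration, not part of the package):

>>> import numpy as np
>>> from online_estimators.control import LQRController
>>> dt, g, length = 0.01, 9.81, 1.0
>>> def linearise(x):
...     # Jacobian of the Euler-discretised pendulum about the current angle x[0] (illustrative helper)
...     A = np.array([[1.0, dt], [-(g / length) * np.cos(x[0]) * dt, 1.0]])
...     B = np.array([[0.0], [dt]])
...     return A, B
>>> x = np.array([0.3, 0.0])
>>> ctrl = LQRController(*linearise(x), np.eye(2), np.eye(1))
>>> for _ in range(3):
...     ctrl.update_linearized_dynamics(*linearise(x))  # refresh the gain about the current state
...     u = np.ravel(ctrl.control(x))
...     x = x + dt * np.array([x[1], -(g / length) * np.sin(x[0]) + u[0]])  # nonlinear pendulum step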

control(x)[source]#

Compute the control input u = -K x.

Parameters:

x (np.ndarray, shape (n,) or (n, 1))

Returns:

Control vector u.

Return type:

np.ndarray
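
A short closed-loop sketch (illustrative, reusing the matrices from the class example above): applying u = -K x to the same linear model the gain was designed for should drive the state towards the origin.

>>> import numpy as np
>>> from online_estimators.control import LQRController
>>> A = np.eye(2) + 0.01 * np.array([[0, 1], [-1, 0]])
>>> B = np.array([[0], [0.01]])
>>> ctrl = LQRController(A, B, np.eye(2), np.eye(1))
>>> x = np.array([0.5, 0.1])
>>> for _ in range(1000):
...     u = np.ravel(ctrl.control(x))  # flatten in case u is returned as (m, 1)
...     x = A @ x + B @ u              # propagate the linear model under the LQR law
>>> bool(np.linalg.norm(x) < 0.01)
True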