Estimators#
All estimators inherit from BaseEstimator and expose a single iterate(A, b) -> theta interface.
Base Class#
Abstract base class for all online parameter estimators.
Every estimator in TAG-K follows the same iterate(A, b) -> theta protocol,
making them interchangeable in simulation loops and benchmarks.
- class online_estimators.estimators.base.BaseEstimator(num_params)[source]#
Bases: ABC

Abstract base class for online parameter estimators.

All estimators solve the overdetermined linear system A theta ~= b incrementally – each call to iterate() ingests a new batch of rows and returns the current best estimate of theta.

- Parameters:
num_params (int) – Dimension of the parameter vector theta.
- abstractmethod iterate(A, b)[source]#
Ingest a new measurement batch and return the updated estimate.
- Parameters:
A (np.ndarray, shape (m, n)) – Measurement (regressor) matrix for this batch.
b (np.ndarray, shape (m,)) – Observation vector for this batch.
- Returns:
Updated parameter estimate theta_hat.
- Return type:
np.ndarray, shape (n,)
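Because every estimator shares this protocol, writing a new one only requires subclassing BaseEstimator and implementing iterate. A minimal sketch (the RunningLstsq class below is a hypothetical example, not part of the package, and assumes BaseEstimator.__init__ accepts num_params as documented):

import numpy as np

from online_estimators.estimators.base import BaseEstimator

class RunningLstsq(BaseEstimator):
    """Hypothetical estimator: re-solves least squares over all batches seen."""

    def __init__(self, num_params):
        super().__init__(num_params)
        self._A = np.empty((0, num_params))
        self._b = np.empty(0)

    def iterate(self, A, b):
        # Accumulate the batch, then return the least-squares estimate so far.
        self._A = np.vstack([self._A, A])
        self._b = np.concatenate([self._b, b])
        theta_hat, *_ = np.linalg.lstsq(self._A, self._b, rcond=None)
        return theta_hat

Any such subclass can be dropped into the same simulation loops and benchmarks as the built-in estimators.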
Recursive Least Squares#
Recursive Least Squares (RLS) estimator with exponential forgetting.
The RLS algorithm minimises a weighted least-squares cost where older
measurements are exponentially down-weighted via a forgetting factor
lam in (0, 1]. Smaller lam discounts old data more aggressively,
yielding faster adaptation at the expense of higher variance.
- class online_estimators.estimators.rls.RLS(num_params, theta_hat=None, forgetting_factor=0.3, c=1000.0)[source]#
Bases: BaseEstimator

Recursive Least Squares with exponential forgetting.

The update equations are:

\[\begin{split}S_k &= A_k P_{k-1} A_k^T + \lambda I \\ K_k &= P_{k-1} A_k^T S_k^{-1} \\ \hat\theta_k &= \hat\theta_{k-1} + K_k (b_k - A_k \hat\theta_{k-1}) \\ P_k &= (P_{k-1} - K_k A_k P_{k-1}) / \lambda\end{split}\]

When the C++ extension online_estimation is installed, the iterate method delegates to a compiled implementation for better performance.

- Parameters:
num_params (int) – Dimension of the parameter vector theta.
theta_hat (np.ndarray or None) – Initial estimate. Defaults to zero.
forgetting_factor (float) – Forgetting factor lam in (0, 1].
c (float) – Initial covariance scaling – P_0 = c * I.
Examples
>>> import numpy as np
>>> from online_estimators.estimators import RLS
>>> est = RLS(num_params=3, forgetting_factor=0.9)
>>> A = np.random.randn(6, 3)
>>> b = A @ np.array([1.0, 2.0, 3.0])
>>> theta = est.iterate(A, b)
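For reference, one batch update matching the equations above can be written directly in NumPy. This is an illustrative sketch, not the package's implementation; lam is the forgetting factor and P the current covariance:

import numpy as np

def rls_update(theta, P, A, b, lam=0.9):
    """One exponentially weighted RLS batch update (sketch)."""
    m = A.shape[0]
    S = A @ P @ A.T + lam * np.eye(m)     # innovation covariance S_k
    K = P @ A.T @ np.linalg.inv(S)        # gain K_k
    theta = theta + K @ (b - A @ theta)   # correct the estimate
    P = (P - K @ A @ P) / lam             # dividing by lam inflates P, forgetting old data
    return theta, P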
Kalman Filter#
Kalman Filter for parameter estimation (flipped model).
In the “flipped-model” formulation the parameters are treated as the
hidden state and the measurements provide an observation equation
b = A theta + v. A constant-dynamics model theta_{k+1} = theta_k + w is
assumed, making this a simple parameter-tracking filter.
- class online_estimators.estimators.kf.KF(num_params, process_noise, measurement_noise, theta_hat=None, c=10.0)[source]#
Bases: BaseEstimator

Kalman Filter for online parameter estimation.

State model: theta_{k+1} = theta_k + w_k, with w_k ~ N(0, Q)
Observation: b_k = A_k theta_k + v_k, with v_k ~ N(0, R)

When the C++ extension online_estimation is installed, the iterate method delegates to a compiled implementation for better performance.

- Parameters:
num_params (int) – Dimension of the parameter vector.
process_noise (np.ndarray, shape (n, n)) – Process noise covariance Q.
measurement_noise (np.ndarray, shape (m, m)) – Measurement noise covariance R. Must match the number of rows in A at each call.
theta_hat (np.ndarray or None) – Initial estimate. Defaults to zero.
c (float) – Initial covariance scaling – P_0 = c * I.
Examples
>>> import numpy as np
>>> from online_estimators.estimators import KF
>>> n, m = 4, 6
>>> kf = KF(n, process_noise=1e-3*np.eye(n),
...         measurement_noise=1e-4*np.eye(m))
>>> A = np.random.randn(m, n)
>>> b = A @ np.ones(n) + 0.01 * np.random.randn(m)
>>> theta = kf.iterate(A, b)
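Under the constant-dynamics model, each batch reduces to the standard predict/correct recursion. A NumPy sketch of one step (illustrative, not the package's implementation):

import numpy as np

def kf_param_update(theta, P, A, b, Q, R):
    """One predict/correct step of the flipped-model KF (sketch)."""
    P = P + Q                              # predict: theta_{k+1} = theta_k + w_k
    S = A @ P @ A.T + R                    # innovation covariance
    K = P @ A.T @ np.linalg.inv(S)         # Kalman gain
    theta = theta + K @ (b - A @ theta)    # correct with the new batch
    P = P - K @ A @ P                      # posterior covariance
    return theta, P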
Randomized Kaczmarz Family#
Randomized Kaczmarz (RK) and its preconditioned/augmented variants.
The basic Randomized Kaczmarz algorithm solves Ax = b by repeatedly
projecting onto randomly selected hyperplanes defined by individual rows
of A. Rows are sampled proportional to their squared norm.
This module provides:
- RK – vanilla randomized Kaczmarz
- TARK – tail-averaged RK (Polyak averaging over the second half)
- RK_ColScaled – column-scaled (right-preconditioned) RK
- RK_Equi – Ruiz-equilibrated RK (left + right preconditioning)
- RK_Tikh – Tikhonov-augmented RK (ridge-like regularisation)
- RK_EquiTikh – Ruiz equilibration + Tikhonov augmentation
- online_estimators.estimators.rk.ruiz_equilibrate(A, iters=5, eps=1e-12)[source]#
Ruiz equilibration: iterative row/column scaling.
Returns (B, Dr, Dc) such that B = diag(Dr) A diag(Dc) is approximately doubly stochastic in row/column norms.

- Parameters:
A (np.ndarray, shape (m, n)) – Matrix to equilibrate.
iters (int) – Number of scaling iterations.
eps (float) – Small floor to avoid division by zero.
- Returns:
B (np.ndarray, shape (m, n)) – Equilibrated matrix.
Dr (np.ndarray, shape (m,)) – Row scaling factors.
Dc (np.ndarray, shape (n,)) – Column scaling factors.
- Return type:
tuple of np.ndarray
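The scaling loop itself is short; a compact sketch of the standard Ruiz iteration (illustrative, the packaged function may differ in detail):

import numpy as np

def ruiz_sketch(A, iters=5, eps=1e-12):
    """Alternately scale rows and columns toward unit norm (sketch)."""
    B = A.astype(float).copy()
    Dr = np.ones(A.shape[0])
    Dc = np.ones(A.shape[1])
    for _ in range(iters):
        r = 1.0 / np.sqrt(np.linalg.norm(B, axis=1) + eps)  # row factors
        B = r[:, None] * B
        Dr *= r
        c = 1.0 / np.sqrt(np.linalg.norm(B, axis=0) + eps)  # column factors
        B = B * c[None, :]
        Dc *= c
    return B, Dr, Dc  # B = diag(Dr) A diag(Dc)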
- class online_estimators.estimators.rk.RK(num_params, x0=None)[source]#
Bases: BaseEstimator

Classic Randomized Kaczmarz.

At each call, performs m inner Kaczmarz sweeps (one per row in expectation) with row-norm-proportional sampling.

When the C++ extension online_estimation is installed, the iterate method delegates to a compiled implementation for better performance.

- Parameters:
num_params (int) – Dimension of theta.
x0 (np.ndarray or None) – Initial iterate. Defaults to zero.
Examples
>>> import numpy as np
>>> from online_estimators.estimators import RK
>>> rk = RK(num_params=5)
>>> A = np.random.randn(20, 5)
>>> b = A @ np.ones(5)
>>> theta = rk.iterate(A, b)
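The core projection step behind RK is only a few lines of NumPy; an illustrative sketch of a sweep with row-norm-proportional sampling:

import numpy as np

def kaczmarz_sweep(A, b, x, n_steps, rng=None):
    """Randomized Kaczmarz projections (sketch)."""
    rng = rng or np.random.default_rng()
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    for _ in range(n_steps):
        i = rng.choice(A.shape[0], p=probs)           # sample row i with prob ∝ ||a_i||^2
        a_i = A[i]
        x = x + (b[i] - a_i @ x) / row_norms_sq[i] * a_i  # project onto hyperplane a_i x = b_i
    return x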
- class online_estimators.estimators.rk.TARK(num_params, x0=None, default_burnin=0)[source]#
Bases: BaseEstimator

Tail-Averaged Randomized Kaczmarz.

Identical to RK but returns the Polyak average of the iterates after a configurable burn-in period.

When the C++ extension online_estimation is installed, the iterate method delegates to a compiled implementation for better performance. Note that the C++ backend uses the burnin value from construction and ignores the per-call burnin argument.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
default_burnin (int)
Examples
>>> from online_estimators.estimators import TARK
>>> est = TARK(num_params=3)
>>> theta = est.iterate(A, b, burnin=5)
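Tail averaging layers a running mean on top of the same sweep; a sketch of the idea (illustrative, not the package's code):

import numpy as np

def tail_averaged_kaczmarz(A, b, x, n_steps, burnin=0, rng=None):
    """Kaczmarz with Polyak averaging of post-burn-in iterates (sketch)."""
    rng = rng or np.random.default_rng()
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x_bar, count = np.zeros_like(x), 0
    for k in range(n_steps):
        i = rng.choice(A.shape[0], p=probs)
        x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
        if k >= burnin:                   # only iterates after burn-in enter the average
            count += 1
            x_bar += (x - x_bar) / count  # running mean
    return x_bar if count else x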
- class online_estimators.estimators.rk.RK_ColScaled(num_params, x0=None)[source]#
Bases: BaseEstimator

Randomized Kaczmarz with column scaling (right preconditioning).

Scales columns of A by their standard deviation before running Kaczmarz, then maps the solution back. This is effective when columns have very different magnitudes.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
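The scale/solve/unscale pattern can be sketched as follows; kaczmarz_solver stands in for any solver of the scaled system and is hypothetical:

import numpy as np

def column_scaled_solve(A, b, kaczmarz_solver):
    """Right preconditioning by column standard deviation (sketch)."""
    s = A.std(axis=0)
    s[s == 0] = 1.0                 # guard constant columns
    y = kaczmarz_solver(A / s, b)   # solve (A D^{-1}) y = b with D = diag(s)
    return y / s                    # map back: x = D^{-1} y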
- class online_estimators.estimators.rk.RK_Equi(num_params, x0=None, ruiz_iters=5)[source]#
Bases: BaseEstimator

Randomized Kaczmarz with Ruiz equilibration.

Applies iterative row/column (Ruiz) scaling before Kaczmarz to improve the condition number of A.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
ruiz_iters (int)
- class online_estimators.estimators.rk.RK_Tikh(num_params, x0=None, lam=None, lam_scale=1e-05)[source]#
Bases: BaseEstimator

Randomized Kaczmarz on the Tikhonov-augmented system.

Appends sqrt(lam) * I rows and a zero RHS to the system, turning Kaczmarz into an implicit ridge regression.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
lam (float or None)
lam_scale (float)
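The augmentation itself is just a stacked system; solving it in the least-squares sense is exactly ridge regression. A sketch:

import numpy as np

def tikhonov_augment(A, b, lam):
    """Stack sqrt(lam)*I rows with a zero right-hand side (sketch)."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return A_aug, b_aug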
- class online_estimators.estimators.rk.RK_EquiTikh(num_params, x0=None, ruiz_iters=5, lam=None, lam_scale=0.001)[source]#
Bases: BaseEstimator

RK with both Ruiz equilibration and Tikhonov augmentation.
Combines the benefits of condition-number improvement (Ruiz) and regularisation (Tikhonov).
- Parameters:
num_params (int)
x0 (np.ndarray or None)
ruiz_iters (int)
lam (float or None)
lam_scale (float)
Greedy Randomized Kaczmarz Family#
Greedy Randomized Kaczmarz (GRK) and its variants.
GRK improves upon vanilla Randomized Kaczmarz by selecting rows with
large residual-to-norm ratios, concentrating updates where they reduce
the error most. An adaptive threshold eps_k is computed each iteration
to define the “greedy” working set.
Variants in this module:
- GRK – base greedy RK
- TAGK – GRK + Polyak tail-averaging
- GRK_Tikh – GRK on the Tikhonov-augmented system
- TAGK_Tikh – GRK + Tikhonov + tail-averaging
- class online_estimators.estimators.grk.GRK(num_params, x0=None, default_tol=0.0001)[source]#
Bases: BaseEstimator

Greedy Randomized Kaczmarz (GRK).
Selects rows whose per-row residual is large relative to the global residual, then samples from this “greedy set” with residual-proportional probabilities.
When the C++ extension online_estimation is installed, the iterate method delegates to a compiled implementation for better performance. The C++ backend does not support the n_iters, x0, tol, or rng arguments – those calls fall back to pure Python automatically.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
default_tol (float)
References
Bai, Z. & Wu, W. “On greedy randomized Kaczmarz method for solving large sparse linear systems.” SIAM J. Sci. Comput., 40(1), 2018.
Examples
>>> from online_estimators.estimators import GRK
>>> grk = GRK(num_params=10)
>>> theta = grk.iterate(A, b)
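The greedy selection rule from Bai & Wu can be sketched in a few lines (illustrative; the package's thresholds may differ in detail):

import numpy as np

def grk_step(A, b, x, rng=None):
    """One greedy randomized Kaczmarz step after Bai & Wu (sketch)."""
    rng = rng or np.random.default_rng()
    r = b - A @ x
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    ratios = r**2 / row_norms_sq               # per-row residual-to-norm ratios
    r_norm_sq = r @ r
    # Adaptive threshold eps_k; row_norms_sq.sum() is ||A||_F^2.
    eps_k = 0.5 * (ratios.max() / r_norm_sq + 1.0 / row_norms_sq.sum())
    greedy = np.flatnonzero(ratios >= eps_k * r_norm_sq)   # greedy working set
    probs = r[greedy]**2 / (r[greedy]**2).sum()            # residual-proportional sampling
    i = rng.choice(greedy, p=probs)
    return x + (r[i] / row_norms_sq[i]) * A[i]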
- class online_estimators.estimators.grk.TAGK(num_params, x0=None, burnin=0, default_tol=0.0001)[source]#
Bases: BaseEstimator

Greedy RK with Polyak tail-averaging.

Same greedy row selection as GRK, but returns the average of iterates collected after a burn-in phase.

When the C++ extension online_estimation is installed and no per-call overrides are requested, the iterate method delegates to the C++ TAGK implementation for better performance.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
burnin (int)
default_tol (float)
- class online_estimators.estimators.grk.GRK_Tikh(num_params, x0=None, lam=None, lam_scale=1e-05)[source]#
Bases: BaseEstimator

Greedy RK on the Tikhonov-augmented system.

Appends sqrt(lam) * I rows to A for implicit regularisation, then applies greedy row selection.
- class online_estimators.estimators.grk.TAGK_Tikh(num_params, x0=None, lam=None, lam_scale=0.001)[source]#
Bases: BaseEstimator

Greedy RK + Tikhonov augmentation + tail-averaging.
This is the TAG-K algorithm: combines greedy row selection, Tikhonov regularisation, and Polyak tail-averaging for robust online parameter estimation.
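Usage mirrors the other estimators; a sketch using the module path from the class entry above:

>>> import numpy as np
>>> from online_estimators.estimators.grk import TAGK_Tikh
>>> est = TAGK_Tikh(num_params=4)
>>> A = np.random.randn(12, 4)
>>> b = A @ np.ones(4)
>>> theta = est.iterate(A, b)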
Block Methods#
Block and specialised Kaczmarz variants.
- FDBK – Fast Deterministic Block Kaczmarz
- IGRK – Improved Greedy Randomized Kaczmarz
- MGRK – Momentum-accelerated Greedy RK
- REK – Randomized Extended Kaczmarz (handles inconsistent systems)
- class online_estimators.estimators.block.FDBK(num_params, x0=None)[source]#
Bases: BaseEstimator

Fast Deterministic Block Kaczmarz (FDBK).
At each iteration, constructs a block of greedy rows (those with large per-row residuals) and performs a single combined projection. This can converge faster than single-row methods on well-structured systems.
- Parameters:
num_params (int)
x0 (np.ndarray or None)
References
Necoara, I. “Faster randomized block Kaczmarz algorithms.” SIAM J. Matrix Anal. Appl., 40(4), 2019.
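One common form of the combined block projection in the fast deterministic block Kaczmarz literature is sketched below; the greedy threshold factor eta and the exact step rule are assumptions and need not match the package's implementation:

import numpy as np

def block_kaczmarz_step(A, b, x, eta=0.5):
    """One greedy block projection (FDBK-style sketch)."""
    r = b - A @ x
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    ratios = r**2 / row_norms_sq
    # Greedy block: rows whose residual ratio is within a factor eta of the max.
    U = np.flatnonzero(ratios >= eta * ratios.max())
    d = A[U].T @ r[U]                 # combined update direction
    step = (r[U] @ r[U]) / (d @ d)    # FDBK-style step: ||r_U||^2 / ||A_U^T r_U||^2
    return x + step * d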
- class online_estimators.estimators.block.IGRK(num_params, x0=None)[source]#
Bases: BaseEstimator

Improved Greedy Randomized Kaczmarz.

Uses a stricter greedy threshold (based on crit-max + Frobenius contribution) and runs m*n inner iterations per call.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
- class online_estimators.estimators.block.MGRK(num_params, x0=None, alpha=1.0, beta=0.25, theta=0.5)[source]#
Bases: BaseEstimator

Momentum-accelerated Greedy Randomized Kaczmarz.

Adds a heavy-ball momentum term beta * (x_k - x_{k-1}) to the standard Kaczmarz update for faster convergence on ill-conditioned systems.

- Parameters:
num_params (int)
x0 (np.ndarray or None)
alpha (float)
beta (float)
theta (float)
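The heavy-ball modification of a single projection can be sketched as follows (illustrative; alpha acts as a relaxation parameter on the projection):

import numpy as np

def momentum_kaczmarz_step(A, b, x, x_prev, i, alpha=1.0, beta=0.25):
    """Kaczmarz projection onto row i with heavy-ball momentum (sketch)."""
    a_i = A[i]
    proj = alpha * (b[i] - a_i @ x) / (a_i @ a_i) * a_i   # relaxed projection
    x_new = x + proj + beta * (x - x_prev)                # momentum term
    return x_new, x                                       # caller keeps x as new x_prev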
- class online_estimators.estimators.block.REK(num_params, x0=None)[source]#
Bases: BaseEstimator

Randomized Extended Kaczmarz.

Simultaneously projects onto both row and column spaces of A, allowing convergence even for inconsistent systems (where b not in range(A)).

- Parameters:
num_params (int)
x0 (np.ndarray or None)
References
Zouzias, A. & Freris, N. “Randomized extended Kaczmarz for solving least squares.” SIAM J. Matrix Anal. Appl., 34(2), 2013.
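The two interleaved projections of REK can be sketched directly from the Zouzias & Freris recursion; an illustrative version:

import numpy as np

def rek_sweep(A, b, x, n_steps, rng=None):
    """Randomized Extended Kaczmarz (sketch after Zouzias & Freris)."""
    rng = rng or np.random.default_rng()
    row_p = np.einsum('ij,ij->i', A, A)           # squared row norms
    col_p = np.einsum('ij,ij->j', A, A)           # squared column norms
    z = b.copy()                                  # converges to the part of b outside range(A)
    for _ in range(n_steps):
        j = rng.choice(A.shape[1], p=col_p / col_p.sum())
        z -= (A[:, j] @ z) / col_p[j] * A[:, j]   # column step: push z toward null(A^T)
        i = rng.choice(A.shape[0], p=row_p / row_p.sum())
        x = x + (b[i] - z[i] - A[i] @ x) / row_p[i] * A[i]  # Kaczmarz on the consistent part
    return x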
C++ Backend Loader#
Optional C++ backend for estimators.
If the online_estimation C++ extension is installed, estimator classes
will delegate to compiled implementations for better performance. The
extension is built from the cpp/ directory via pip install ./cpp.
When the extension is not available, all estimators fall back to their pure-Python implementations transparently.
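The usual shape of such an optional-backend loader is a guarded import with a per-call dispatch; a generic sketch (all names other than online_estimation are hypothetical):

# Generic optional-backend pattern (sketch, not the package's actual loader).
try:
    import online_estimation as _cpp   # compiled extension, if installed
    HAS_CPP = True
except ImportError:
    _cpp = None
    HAS_CPP = False

class SomeEstimator:
    def iterate(self, A, b):
        if HAS_CPP:
            return self._cpp_state.iterate(A, b)   # delegate to compiled code
        return self._iterate_python(A, b)          # pure-Python fallback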