TikhonovDecoupledSolver#

class TikhonovDecoupledSolver(regularizer, method: str = 'lstsq', cond: float = None, lapack_driver: str = None)[source]#

Solve \(r\) independent \(2\)-norm ordinary least-squares problems, each with the same data matrix but a different Tikhonov regularization.

That is, for \(i = 1, \ldots, r\), construct \(\Ohat\) by solving

\[\ohat_i = \argmin_{\ohat_i}\|\D\ohat_i - \z_i\|_2^2 + \|\bfGamma_i\ohat_i\|_2^2,\]

where \(\ohat_i\) and \(\z_i\) are the \(i\)-th rows of \(\Ohat\) and \(\Z\), respectively, and \(\bfGamma_i\) is the corresponding symmetric positive semi-definite regularization matrix.

This is equivalent to solving the following stacked least-squares problems:

\[\begin{split}\ohat_i = \argmin_{\ohat_i} \left\| \left[\begin{array}{c}\D \\ \bfGamma_i\end{array}\right]\ohat_i - \left[\begin{array}{c}\z_i \\ \0\end{array}\right] \right\|_2^2, \quad i = 1, \ldots, r.\end{split}\]

The exact solution of the \(i\)-th problem is described by the normal equations:

\[(\D\trp\D + \bfGamma_i\trp\bfGamma_i)\ohat_i = \D\trp\z_i.\]
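
To make the decoupled formulation concrete, here is a minimal NumPy/SciPy sketch that solves each row's problem through the stacked least-squares form and checks the result against the normal equations. The names D, Z, and Gammas are placeholders for this illustration, not part of the solver's API.

import numpy as np
import scipy.linalg as la

k, d, r = 50, 6, 3                           # equations, unknowns, operator-matrix rows
rng = np.random.default_rng(0)
D = rng.standard_normal((k, d))              # shared data matrix
Z = rng.standard_normal((r, k))              # left-hand side data, one row per problem
Gammas = [10.0 ** (i - 2) * np.eye(d) for i in range(r)]   # one regularizer per row

Ohat = np.empty((r, d))
for i, Gamma in enumerate(Gammas):
    # Stacked least-squares problem for row i: minimize ||[D; Gamma_i] ohat_i - [z_i; 0]||_2.
    A = np.vstack([D, Gamma])
    b = np.concatenate([Z[i], np.zeros(d)])
    Ohat[i] = la.lstsq(A, b)[0]

    # The same row satisfies the normal equations (D^T D + Gamma_i^T Gamma_i) ohat_i = D^T z_i.
    gram = D.T @ D + Gamma.T @ Gamma
    assert np.allclose(Ohat[i], la.solve(gram, D.T @ Z[i]))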
Parameters:
regularizer : list of r (d, d) or (d,) ndarrays

Symmetric positive semi-definite regularization matrices \(\bfGamma_1,\ldots,\bfGamma_r\). If the \(i\)-th entry of regularizer is a one-dimensional array, it is interpreted as the diagonal entries of \(\bfGamma_i\). Here, d is the number of columns in the data matrix. A construction sketch follows this parameter list.

method : str

Strategy for solving the regularized least-squares problem. Options:

  • "lstsq": solve the stacked least-squares problem via scipy.linalg.lstsq(); by default, this computes and uses the singular value decomposition of the stacked data matrix \([~D\trp~~\bfGamma\trp~]\trp\).

  • "normal": directly solve the normal equations \((\D\trp\D + \bfGamma\trp\bfGamma) \Ohat\trp = \D\trp\Z\trp\) via scipy.linalg.solve().

cond : float or None

Cutoff for ‘small’ singular values of the data matrix, see scipy.linalg.lstsq(). Ignored if method = "normal".

lapack_driver : str or None

Which LAPACK driver is used to solve the least-squares problem, see scipy.linalg.lstsq(). Ignored if method = "normal".
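
Since each entry of regularizer is interpreted independently, full \((d, d)\) matrices and one-dimensional \((d,)\) arrays of diagonal entries can in principle be mixed in the same list. A hedged construction sketch, with the class import omitted:

import numpy as np

d, r = 6, 3

regularizer = [
    1e-2 * np.eye(d),                                         # Gamma_1 as a full (d, d) matrix
    np.full(d, 1e-1),                                         # Gamma_2 given by its (d,) diagonal entries
    np.concatenate([np.full(d - 2, 1e-1), np.full(2, 1e1)]),  # Gamma_3 with entrywise weights
]

solver = TikhonovDecoupledSolver(regularizer, method="lstsq")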

Properties:
d#

Number of unknowns in each row of the operator matrix (number of columns of \(\D\) and \(\Ohat\)).

data_matrix#

\(k \times d\) data matrix \(\D\).

k#

Number of equations in the least-squares problem (number of rows of \(\D\) and number of columns of \(\Z\)).

lhs_matrix#

\(r \times k\) left-hand side data \(\Z\).

method#

Strategy for solving the regularized least-squares problem, either "lstsq" (default) or "normal".

options#

Keyword arguments for scipy.linalg.lstsq().

r#

Number of operator matrix rows to learn (number of rows of \(\Z\) and \(\Ohat\)).

regularizer#

Symmetric positive semi-definite regularization matrices \(\bfGamma_1,\ldots,\bfGamma_r\), one for each row of the operator matrix.

Methods:

cond

Compute the \(2\)-norm condition number of the data matrix \(\D\).

copy

Make a copy of the solver.

fit

Verify dimensions and precompute quantities in preparation to solve the least-squares problem.

get_operator_regularizer

Construct a regularizer so that each operator is regularized separately.

load

Load a serialized solver from an HDF5 file, created previously by the save() method.

posterior

Solve the Bayesian operator inference regression, constructing the means and inverse covariances of probability distributions for the rows of an operator matrix posterior.

regcond

Compute the \(2\)-norm condition number of each regularized data matrix \([~\D\trp~~\bfGamma_i\trp~]\trp,~~i = 1, \ldots, r\).

regresidual

Compute the residual of the regularized regression objective for each row of the given operator matrix.

residual

Compute the residual of the \(2\)-norm regression objective for each row of the given operator matrix.

save

Serialize the solver, saving it in HDF5 format.

solve

Solve the Operator Inference regression.

verify

Verify the solver.
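
A minimal end-to-end usage sketch. The fit() argument order (data matrix first, then left-hand side), the solve() return value, and the exact signatures of cond(), regcond(), and residual() are assumptions consistent with the one-line summaries above; consult the individual method documentation for the authoritative interface.

import numpy as np

k, d, r = 50, 6, 3
rng = np.random.default_rng(0)
D = rng.standard_normal((k, d))                      # k x d data matrix
Z = rng.standard_normal((r, k))                      # r x k left-hand side data
regularizer = [10.0 ** (i - 2) * np.eye(d) for i in range(r)]

solver = TikhonovDecoupledSolver(regularizer, method="lstsq")   # class import omitted
solver.fit(D, Z)                                     # assumed argument order: (D, Z)
Ohat = solver.solve()                                # assumed to return the r x d operator matrix

print(solver.cond())                                 # 2-norm condition number of D
print(solver.regcond())                              # one condition number per regularized problem
print(solver.residual(Ohat))                         # 2-norm residual for each row of Ohat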