opinf.operators#

Operator classes for the individual terms of a reduced-order model.

Overview#

Reduced-order models based on Operator Inference are systems of ordinary differential equations (or discrete-time difference equations) that can be written as

(9)#\[ \begin{align*} \ddt\qhat(t) = \sum_{\ell=1}^{n_\textrm{terms}} \Ophat_{\ell}(\qhat(t),\u(t)) \end{align*} \]

where each \(\Ophat_{\ell}:\RR^{r}\times\RR^{m}\to\RR^{r}\) is a vector-valued function that is polynomial with respect to the reduced state \(\qhat\in\RR^{r}\) and the input \(\u\in\RR^{m}\). Such functions, which we refer to as operators, can be represented by a matrix-vector product

\[ \begin{align*} \Ophat_{\ell}(\qhat,\u) = \Ohat_{\ell}\d_{\ell}(\qhat,\u) \end{align*} \]

for some data-dependent vector \(\d_{\ell}:\RR^{r}\times\RR^{m}\to\RR^{d}\) and constant matrix \(\Ohat_{\ell}\in\RR^{r\times d}\). The goal of Operator Inference is to learn—using only data—the operator entries \(\Ohat_\ell\) for each operator in the reduced-order model.
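For example, a quadratic term fits this form with data vector \(\d_{\ell}(\qhat,\u) = \qhat\otimes\qhat\). The following NumPy sketch (purely illustrative, with made-up dimensions; it does not use the package API) evaluates such a term through its matrix-vector product representation.

import numpy as np

# Illustrative only: evaluate a quadratic term Ophat(q, u) = Hhat[q ⊗ q]
# through the factored form Ohat @ d(q, u). All dimensions are made up.
r = 3
rng = np.random.default_rng(0)
Hhat = rng.standard_normal((r, r**2))  # operator entries, r x r^2
qhat = rng.standard_normal(r)          # reduced state

d = np.kron(qhat, qhat)                # data vector d(q, u) = q ⊗ q, length r^2
result = Hhat @ d                      # operator action, a vector of length r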

The classes in this module represent different types of operators that can be used in defining the structure of an Operator Inference reduced-order model.

Example

To construct a linear time-invariant (LTI) system

(10)#\[ \begin{align} \ddt\qhat(t) = \Ahat\qhat(t) + \Bhat\u(t), \qquad \Ahat\in\RR^{r \times r}, ~ \Bhat\in\RR^{r \times m}, \end{align} \]

we use the following operator classes.

| Class | Definition | Operator entries | Data vector |
| --- | --- | --- | --- |
| LinearOperator | \(\Ophat_{1}(\qhat,\u) = \Ahat\qhat\) | \(\Ohat_{1} = \Ahat \in \RR^{r\times r}\) | \(\d_{1}(\qhat,\u) = \qhat\in\RR^{r}\) |
| InputOperator | \(\Ophat_{2}(\qhat,\u) = \Bhat\u\) | \(\Ohat_{2} = \Bhat \in \RR^{r\times m}\) | \(\d_{2}(\qhat,\u) = \u\in\RR^{m}\) |

An opinf.models.ContinuousModel object can be instantiated with a list of operator objects to represent (10) as

\[ \begin{align*} \ddt\qhat(t) = \Ophat_{1}(\qhat(t),\u(t)) + \Ophat_{2}(\qhat(t),\u(t)). \end{align*} \]
import opinf

LTI_model = opinf.models.ContinuousModel(
    operators=[
        opinf.operators.LinearOperator(),
        opinf.operators.InputOperator(),
    ]
)

Nonparametric Operators#

A nonparametric operator \(\Ophat_{\ell}(\qhat,\u) = \Ohat_{\ell}\d_{\ell}(\qhat,\u)\) is one where the entries matrix \(\Ohat_\ell\) is constant.

| Class | Description |
| --- | --- |
| ConstantOperator | Constant operator \(\Ophat_{\ell}(\qhat,\u) = \chat \in \RR^{r}\). |
| LinearOperator | Linear state operator \(\Ophat_{\ell}(\qhat,\u) = \Ahat\qhat\) where \(\Ahat \in \RR^{r \times r}\). |
| QuadraticOperator | Quadratic state operator \(\Ophat_{\ell}(\qhat,\u) = \Hhat[\qhat\otimes\qhat]\) where \(\Hhat\in\RR^{r \times r^{2}}\). |
| CubicOperator | Cubic state operator \(\Ophat_{\ell}(\qhat,\u) = \Ghat[\qhat\otimes\qhat\otimes\qhat]\) where \(\Ghat\in\RR^{r \times r^{3}}\). |
| InputOperator | Linear input operator \(\Ophat_{\ell}(\qhat,\u) = \Bhat\u\) where \(\Bhat \in \RR^{r \times m}\). |
| StateInputOperator | Linear state / input interaction operator \(\Ophat_{\ell}(\qhat,\u) = \Nhat[\u\otimes\qhat]\) where \(\Nhat \in \RR^{r \times rm}\). |

Nonparametric operators can be instantiated without arguments. If the operator entries are known, they can be passed into the constructor or set later with the set_entries() method. The entries are stored as the entries attribute and can be accessed with slicing operations [:].
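For example, the following sketch shows both construction patterns for a linear operator; the matrix A_ is a made-up stand-in for known entries.

import numpy as np
import opinf

A_ = np.diag(-np.arange(1, 6, dtype=float))  # hypothetical known entries (r = 5)

A_op = opinf.operators.LinearOperator(A_)    # entries passed to the constructor

A_op = opinf.operators.LinearOperator()      # ...or instantiated without arguments
A_op.set_entries(A_)                         # and set afterward

print(A_op.entries[:2, :2])                  # the entries attribute supports slicing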

In a reduced-order model, there are two ways to determine the operator entries:

  • Learn the entries from data (nonintrusive Operator Inference), or

  • Shrink an existing high-dimensional operator (intrusive Galerkin projection).

Once the entries are set, the following methods are used to compute the action of the operator or its derivatives (see the sketch after this list).

  • apply(): compute the operator action \(\Ophat_\ell(\qhat, \u)\).

  • jacobian(): construct the state Jacobian \(\ddqhat\Ophat_\ell(\qhat, \u)\).
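Continuing with the A_op operator from the sketch above, these calls look roughly as follows; treat the positional (state, input) signatures as an assumption and check the class documentation for the exact argument names.

qhat = np.ones(5)           # reduced state (r = 5, matching A_ above)
u = np.zeros(2)             # input vector (ignored by a linear state operator)

out = A_op.apply(qhat, u)   # operator action, here A_ @ qhat
J = A_op.jacobian(qhat, u)  # state Jacobian, here simply A_ itself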

Learning Operators from Data#

Suppose we have state-input-derivative data triples \(\{(\qhat_j,\u_j,\dot{\qhat}_j)\}_{j=0}^{k-1}\) that approximately satisfy the model (9), i.e.,

(11)#\[ \begin{align*} \dot{\qhat}_j \approx \Ophat(\qhat_j, \u_j) = \sum_{\ell=1}^{n_\textrm{terms}} \Ophat_{\ell}(\qhat_j, \u_j) = \sum_{\ell=1}^{n_\textrm{terms}} \Ohat_{\ell}\d_{\ell}(\qhat_j, \u_j). \end{align*} \]

Operator Inference determines the operator entries \(\Ohat_1,\ldots,\Ohat_{n_\textrm{terms}}\) by minimizing the residual of (11):

\[ \begin{align*} \min_{\Ohat_1,\ldots,\Ohat_{n_\textrm{terms}}}\sum_{j=0}^{k-1}\left\| \sum_{\ell=1}^{n_\textrm{terms}}\Ohat_\ell\d_\ell(\qhat_j,\u_j) - \dot{\qhat}_j \right\|_2^2. \end{align*} \]

To facilitate this, nonparametric operator classes have a static datablock() method that, given the state-input data pairs \(\{(\qhat_j,\u_j)\}_{j=0}^{k-1}\), forms the matrix

\[\begin{split} \begin{align*} \D_{\ell}\trp = \left[\begin{array}{c|c|c|c} & & & \\ \d_{\ell}(\qhat_0,\u_0) & \d_{\ell}(\qhat_1,\u_1) & \cdots & \d_{\ell}(\qhat_{k-1},\u_{k-1}) \\ & & & \end{array}\right] \in \RR^{d \times k}. \end{align*} \end{split}\]

Then the minimization problem can be written in the linear least-squares form

\[ \begin{align*} \min_{\Ohat_1,\ldots,\Ohat_{n_\textrm{terms}}}\sum_{j=0}^{k-1}\left\| \sum_{\ell=1}^{n_\textrm{terms}}\Ohat_\ell\d_\ell(\qhat_j,\u_j) - \dot{\qhat}_j \right\|_2^2 = \min_{\Ohat}\left\| \D\Ohat\trp - [~\dot{\qhat}_0~~\cdots~~\dot{\qhat}_{k-1}~]\trp \right\|_F^2, \end{align*} \]

where the complete operator matrix \(\Ohat\) and data matrix \(\D\) are concatenations of the operator and data matrices from each operator:

\[\begin{split} \begin{align*} \Ohat = \left[\begin{array}{ccc} & & \\ \Ohat_1 & \cdots & \Ohat_{n_\textrm{terms}} \\ & & \end{array}\right], \qquad \D = \left[\begin{array}{ccc} & & \\ \D_1 & \cdots & \D_{n_\textrm{terms}} \\ & & \end{array}\right]. \end{align*} \end{split}\]

Model classes from opinf.models are instantiated with a list of operators. The model’s fit() method calls the datablock() method of each operator to assemble the full data matrix \(\D\), solves the regression problem for the full operator matrix \(\Ohat\) (see opinf.lstsq), and sets the entries of the \(\ell\)-th operator to \(\Ohat_{\ell}\).
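For the LTI model constructed earlier, a training call might look like the sketch below. Here Q_, Qdot_, and U_ are hypothetical snapshot, time-derivative, and input arrays (r x k, r x k, and m x k); the keyword names and the operators attribute access are assumptions to verify against the opinf.models documentation.

# Hypothetical training data: Q_ (r x k), Qdot_ (r x k), U_ (m x k).
LTI_model.fit(states=Q_, ddts=Qdot_, inputs=U_)

Ahat = LTI_model.operators[0].entries  # learned LinearOperator entries
Bhat = LTI_model.operators[1].entries  # learned InputOperator entries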

Example

For the LTI system (10), the operator inference problem is the following regression.

\[ \begin{align*} \min_{\Ahat,\Bhat}\sum_{j=0}^{k-1}\left\| \Ahat\qhat_j + \Bhat\u_j - \dot{\qhat}_j \right\|_2^2 = \min_{\Ohat}\left\| \D\Ohat\trp - [~\dot{\qhat}_0~~\cdots~~\dot{\qhat}_{k-1}~]\trp \right\|_F^2, \end{align*} \]

with operator matrix \(\Ohat=[~\Ahat~~\Bhat~]\) and data matrix \(\D = [~\Qhat\trp~~\U\trp~]\) where \(\Qhat = [~\qhat_0~~\cdots~~\qhat_{k-1}~]\) and \(\U = [~\u_0~~\cdots~~\u_{k-1}~]\).
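The same regression can be written out directly with NumPy as a sanity check. The sketch below uses made-up arrays and no regularization; it is not how the package solves the problem internally (see opinf.lstsq), only an illustration of the algebra.

import numpy as np

# Made-up dimensions and data, for illustration only.
r, m, k = 4, 2, 50
rng = np.random.default_rng(0)
Qhat = rng.standard_normal((r, k))     # reduced states
U = rng.standard_normal((m, k))        # inputs
Qdot = rng.standard_normal((r, k))     # time derivatives

D = np.hstack([Qhat.T, U.T])           # k x (r + m) data matrix [Q^T  U^T]
OhatT, *_ = np.linalg.lstsq(D, Qdot.T, rcond=None)
Ohat = OhatT.T                         # r x (r + m) operator matrix [A  B]
Ahat, Bhat = Ohat[:, :r], Ohat[:, r:]  # split into the A and B blocks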

Important

Only operators whose entries are not initialized (i.e., left as None) when a model is constructed are learned with Operator Inference when fit() is called. For example, suppose for the LTI system (10) an appropriate input matrix \(\Bhat\) is known and stored as the variable B_.

import opinf

LTI_model = opinf.models.ContinuousModel(
    operators=[
        opinf.operators.LinearOperator(),   # No entries specified.
        opinf.operators.InputOperator(B_),  # Entries set to B_.
    ]
)

In this case, LTI_model.fit() only determines the entries of the LinearOperator object using Operator Inference, via the regression problem

\[\begin{split} \begin{align*} &\min_{\Ahat}\sum_{j=0}^{k-1}\left\| \Ahat\qhat_j - (\dot{\qhat}_j - \Bhat\u_j) \right\|_2^2 \\ &= \min_{\Ahat}\left\| \Qhat\trp\Ahat\trp - [~(\dot{\qhat}_0 - \Bhat\u_0)~~\cdots~~(\dot{\qhat}_{k-1} - \Bhat\u_{k-1})~]\trp \right\|_F^2. \end{align*} \end{split}\]
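In NumPy terms, this amounts to moving the known input contribution to the right-hand side before solving for \(\Ahat\) alone. The sketch below reuses the made-up arrays from the earlier least-squares sketch; B_ is a stand-in for the known input matrix.

# Reusing Qhat, U, Qdot, r, m from the sketch above.
B_ = np.random.default_rng(1).standard_normal((r, m))  # stand-in for the known B

Rhs = Qdot - B_ @ U                    # move the known input term to the right-hand side
AhatT, *_ = np.linalg.lstsq(Qhat.T, Rhs.T, rcond=None)
Ahat = AhatT.T                         # r x r entries of the linear operator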

Learning Operators via Projection#

The goal of Operator Inference is to learn operator entries from data because full-order operators are unknown or computationally inaccessible. However, in some scenarios a subset of the full-order model operators is known, in which case the corresponding reduced-order model operators can be determined through intrusive projection. Consider a full-order operator \(\Op:\RR^{n}\times\RR^{m}\to\RR^{n}\), written \(\Op(\q,\u)\), where

  • \(\q\in\RR^n\) is the full-order state, and

  • \(\u\in\RR^m\) is the input.

Given a trial basis \(\Vr\in\RR^{n\times r}\) and a test basis \(\Wr\in\RR^{n\times r}\), the corresponding intrusive projection of \(\Op\) is the operator \(\Ophat:\RR^{r}\times\RR^{m}\to\RR^{r}\) defined by

\[ \begin{align*} \Ophat(\qhat, \u) = \Wr\trp\Op(\Vr\qhat, \u) \end{align*} \]

where

  • \(\qhat\in\RR^{r}\) is the reduced-order state, and

  • \(\u\in\RR^{m}\) is the input (the same as before).

This approach uses the low-dimensional state approximation \(\q \approx \Vr\qhat\). If \(\Wr = \Vr\), the result is called a Galerkin projection. If \(\Wr \neq \Vr\), it is called a Petrov-Galerkin projection.

Example

Consider the bilinear operator \(\Op(\q,\u) = \N[\u\otimes\q]\) where \(\N\in\RR^{n \times nm}\). The intrusive Galerkin projection of \(\Op\) is the bilinear operator

\[ \begin{align*} \Ophat(\qhat,\u) = \Wr\trp\N[\u\otimes\Vr\qhat] = \Nhat[\u\otimes\qhat] \end{align*} \]

where \(\Nhat = \Wr\trp\N(\I_m\otimes\Vr) \in \RR^{r\times rm}\).

Every operator class has a galerkin() method that performs intrusive projection.
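As a sanity check, the projected entries for the bilinear example above can also be computed by hand. The sketch below uses made-up dimensions and a random orthonormal basis; the way galerkin() is called and what it returns are assumptions to verify against the class documentation.

import numpy as np
import opinf

# Made-up full-order dimensions and entries, for illustration only.
n, m, r = 50, 2, 4
rng = np.random.default_rng(0)
N = rng.standard_normal((n, n * m))                 # full-order bilinear entries
Vr = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal trial basis

# Manual Galerkin projection: Nhat = Vr^T N (I_m ⊗ Vr), an r x rm matrix.
Nhat_manual = Vr.T @ N @ np.kron(np.eye(m), Vr)

# The same projection through the operator class (assumed usage).
N_op = opinf.operators.StateInputOperator(N)
Nhat_op = N_op.galerkin(Vr)   # if conventions match, its entries equal Nhat_manual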

Parametric Operators#

Operators are called parametric if the operator entries depend on an independent parameter vector \(\bfmu\in\RR^{p}\), i.e., \(\Ophat_{\ell}(\qhat,\u;\bfmu) = \Ohat_{\ell}(\bfmu)\d_{\ell}(\qhat,\u)\) where now \(\Ohat_{\ell}:\RR^{p}\to\RR^{r\times d}\).

Example

Let \(\bfmu = [~\mu_{1}~~\mu_{2}~]\trp\). The linear operator \(\Ophat_1(\qhat,\u;\bfmu) = (\mu_{1}\Ahat_{1} + \mu_{2}\Ahat_{2})\qhat\) is a parametric operator with parameter-dependent entries \(\Ohat_1(\bfmu) = \mu_{1}\Ahat_{1} + \mu_{2}\Ahat_{2}\).

Interpolated Operators#

These operators handle the parametric dependence on \(\bfmu\) by using elementwise interpolation:

\[ \begin{align*} \Ohat_{\ell}(\bfmu) = \text{interpolate}( (\bfmu_{1},\Ohat_{\ell}^{(1)}),\ldots,(\bfmu_{s},\Ohat_{\ell}^{(s)}); \bfmu), \end{align*} \]

where \(\bfmu_1,\ldots,\bfmu_s\) are training parameter values and \(\Ohat_{\ell}^{(i)} = \Ohat_{\ell}(\bfmu_i)\) for \(i=1,\ldots,s\).

| Class | Description |
| --- | --- |
| InterpolatedConstantOperator | Parametric constant operator \(\Ophat_{\ell}(\qhat,\u;\bfmu) = \chat(\bfmu) \in \RR^r\) where the parametric dependence is handled with elementwise interpolation. |
| InterpolatedLinearOperator | Parametric linear operator \(\Ophat_{\ell}(\qhat,\u;\bfmu) = \Ahat(\bfmu)\qhat\) where \(\Ahat(\bfmu) \in \RR^{r \times r}\) and the parametric dependence is handled with elementwise interpolation. |
| InterpolatedQuadraticOperator | Parametric quadratic operator \(\Ophat_{\ell}(\qhat,\u;\bfmu) = \Hhat(\bfmu)[\qhat\otimes\qhat]\) where \(\Hhat(\bfmu) \in \RR^{r \times r^2}\) and the parametric dependence is handled with elementwise interpolation. |
| InterpolatedCubicOperator | Parametric cubic operator \(\Ophat_{\ell}(\qhat,\u;\bfmu) = \Ghat(\bfmu)[\qhat\otimes\qhat\otimes\qhat]\) where \(\Ghat(\bfmu) \in \RR^{r \times r^3}\) and the parametric dependence is handled with elementwise interpolation. |
| InterpolatedInputOperator | Parametric input operator \(\Ophat_{\ell}(\qhat,\u;\bfmu) = \Bhat(\bfmu)\u\) where \(\Bhat(\bfmu) \in \RR^{r \times m}\) and the parametric dependence is handled with elementwise interpolation. |
| InterpolatedStateInputOperator | Parametric state-input operator \(\Ophat_{\ell}(\qhat,\u;\bfmu) = \Nhat(\bfmu)[\u\otimes\qhat]\) where \(\Nhat(\bfmu) \in \RR^{r \times rm}\) and the parametric dependence is handled with elementwise interpolation. |
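Conceptually, the elementwise interpolation these classes perform can be pictured with the SciPy sketch below. It uses a scalar parameter, linear interpolation, and made-up training matrices; the Interpolated* classes manage this bookkeeping (and the choice of interpolator) internally.

import numpy as np
from scipy.interpolate import interp1d

# Made-up training data: scalar parameters mu_1, ..., mu_s and the
# corresponding linear operator entries A^(1), ..., A^(s), each r x r.
r, s = 3, 5
rng = np.random.default_rng(0)
mus = np.linspace(0.0, 1.0, s)               # training parameter values
As = rng.standard_normal((s, r, r))          # stacked operator entries

# Interpolate each entry of the r x r matrix independently over mu.
interpolant = interp1d(mus, As, axis=0, kind="linear")
A_new = interpolant(0.37)                    # r x r entries at a new parameter value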

Utilities#

| Function | Description |
| --- | --- |
| is_nonparametric | Return True if obj is a nonparametric operator object. |
| is_parametric | Return True if obj is a parametric operator object. |
| has_inputs | Return True if obj is an operator object whose apply() method acts on the input_ argument, i.e., \(\Ophat_{\ell}(\qhat,\u)\) depends on \(\u\). |
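A quick sketch of these helpers using the operators from the LTI example:

import opinf

A_op = opinf.operators.LinearOperator()
B_op = opinf.operators.InputOperator()

print(opinf.operators.is_nonparametric(A_op))  # True: entries are a fixed matrix
print(opinf.operators.is_parametric(A_op))     # False
print(opinf.operators.has_inputs(A_op))        # False: A_op ignores the input u
print(opinf.operators.has_inputs(B_op))        # True: B_op acts on the input u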