External Inputs#

The fundamental goal of model reduction is to efficiently make physics-based predictions. Given synthetic or experimental data that was generated or collected under a certain set of conditions, we aim to construct a cost-effective model that produces accurate solutions under new sets of conditions. The first tutorial showed an example of evaluating a reduced-order model (ROM) for various initial conditions. This tutorial focuses on problems with external time-dependent inputs.

Problem Statement#

We consider a problem with external inputs that are parameterized by a scalar-valued function \(u:\RR\to\RR.\)

Governing Equations

Let \(\Omega = [0,L]\subset \mathbb{R}\) be the spatial domain indicated by the variable \(x\), and let \([0,T]\subset\mathbb{R}\) be the time domain with variable \(t\). We consider the one-dimensional heat equation with time-dependent Dirichlet boundary conditions,

\[\begin{split} \begin{aligned} &\frac{\partial}{\partial t} q(x,t) = \frac{\partial^2}{\partial x^2}q(x,t) & x &\in\Omega,\quad t\in[0,T], \\ &q(0,t) = q(L,t) = u(t) & t &\in[0,T], \\ &q(x,0) = \big(e^{\alpha(x - 1)} + e^{-\alpha x} - e^{-\alpha}\big)u(0) & x &\in \Omega, \end{aligned} \end{split}\]

where \(\alpha>0\) is constant and \(q(x,t)\) is the unknown state variable. This is a model for a one-dimensional rod conducting heat with a fixed initial heat profile. The temperature at the ends of the rod is governed by the input function \(u(t)\), while heat diffuses through the rod and flows out at the ends of the domain.

Objective

Construct a ROM that can be solved rapidly to produce approximate solutions \(q(x, t)\) of the partial differential equation given above for various choices of the input function \(u(t)\). Furthermore, we observe data only over a limited time interval \(t \in [0, T']\) with \(T' < T\), then use the ROM to predict the solution over the entire time domain \([0, T]\). Hence, the ROM must be predictive in time and predictive in the inputs.

import numpy as np
import scipy.sparse
import matplotlib.pyplot as plt

import opinf

opinf.utils.mpl_config()

Single Training Trajectory#

In this section a ROM is trained using data collected for a single choice of the input function \(u(t).\)

Full-order Model Definition#

As in the last tutorial, we use a centered finite difference approximation for the spatial derivative to arrive at a system of \(n\) ordinary differential equations. This time, due to the nonzero boundary conditions, the system takes the form

(8)#\[ \begin{aligned} \ddt\q(t) = \A\q(t) + \B u(t), \qquad \q(0) = \q_0, \end{aligned} \]

where \(\q:\RR\to\RR^n\), \(\A\in\RR^{n\times n}\), and \(\B\in\RR^{n}\). The system (8) is the full-order model (FOM), which we will use to generate training data for the time domain \([0, T'] \subset [0, T]\).

Discretization details

We take an equidistant grid \(\{x_i\}_{i=0}^{n+1} \subset \Omega\),

\[ \begin{aligned} 0 &= x_0 < x_1 < \cdots < x_n < x_{n+1} = L & &\text{and} & \delta x &= \frac{L}{n+1} = x_{i+1} - x_{i},\quad i=0,\ldots,n. \end{aligned} \]

The boundary conditions prescribe \(q(x_0,t) = q(x_{n+1},t) = u(t)\). Our goal is to compute \(q(x,t)\) at the interior spatial points \(x_{1},x_{2},\ldots,x_{n}\) for various \(t\in[0,T]\), so we consider the state vector \(\q(t) = [~q(x_{1}, t)~~\cdots~~q(x_{n}, t)~]\trp\in\RR^n\) and derive a system governing the evolution of \(\q(t)\) in time.

Approximating the spatial derivative with a central finite difference approximation,

\[ \begin{aligned} \frac{\partial^2}{\partial x^2}q(x,t) \approx \frac{q(x-\delta x,t) - 2q(x,t) + q(x+\delta x,t)}{(\delta x)^2}, \end{aligned} \]

and using the boundary conditions \(q(0,t) = q(L,t) = u(t)\), we arrive at the following matrices for the FOM.

\[\begin{split} \begin{aligned} \A &= \frac{1}{(\delta x)^2}\left[\begin{array}{ccccc} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \\ \end{array}\right] \in\RR^{n\times n}, & \B &= \frac{1}{(\delta x)^2}\left[\begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \\ 1 \end{array}\right]\in\RR^{n}. \end{aligned} \end{split}\]

Training Data Generation#

Let \(L = 1\), \(T = 1\), and \(\alpha = 100\). We begin by solving the FOM described above for a single choice of the input function \(u(t)\), recording the solution every \(\delta t = 10^{-3}\) time units. This yields \(10^3 + 1 = 1001\) saved states: the initial condition plus 1000 time steps. We assume that we can observe only the first \(k = 200\) of these and use the ROM to predict the remaining 801. Our training input function is

\[ \begin{aligned} u_\text{train}(t) = 1 + \frac{1}{4}\sin(4\pi t). \end{aligned} \]
def training_input(tt):
    return np.ones_like(tt) + np.sin(4 * np.pi * tt) / 4
# Construct the spatial domain.
L = 1
n = 2**10 - 1
x_all = np.linspace(0, L, n + 2)
x = x_all[1:-1]
dx = x[1] - x[0]

# Construct the temporal domain.
T = 1
K = 10**3 + 1
t_all = np.linspace(0, T, K)
dt = t_all[1] - t_all[0]

# Construct the full-order state matrix A.
dx2inv = 1 / dx**2
diags = np.array([1, -2, 1]) * dx2inv
A = scipy.sparse.diags(diags, [-1, 0, 1], (n, n))

# Construct the full-order input matrix B.
B = np.zeros_like(x)
B[0], B[-1] = dx2inv, dx2inv

# Define the full-order model with an opinf.models class.
fom = opinf.models.ContinuousModel(
    operators=[
        opinf.operators.LinearOperator(A),
        opinf.operators.InputOperator(B),
    ]
)

# Construct the part of the initial condition not dependent on u(t).
alpha = 100
q0 = np.exp(alpha * (x - 1)) + np.exp(-alpha * x) - np.exp(-alpha)


def full_order_solve(time_domain, u):
    """Solve the full-order model with SciPy.
    Here, u is a callable function.
    """
    return fom.predict(q0 * u(0), time_domain, u, method="BDF")


# Solve the full-order model with the training input.
with opinf.utils.TimedBlock("Full-order solve"):
    Q_all = full_order_solve(t_all, training_input)

# Retain only the first k snapshots/inputs for training the ROM.
k = 200
t = t_all[:k]
Q = Q_all[:, :k]

print(f"\nSpatial domain:\t\t{x.shape=}")
print(f"Spatial step size:\t{dx=:.10f}")
print(f"\nFull time domain:\t{t_all.shape=}")
print(f"Training time domain:\t{t.shape=}")
print(f"Temporal step size:\t{dt=:f}")
print(f"\nFull-order matrix A:\t{A.shape=}")
print(f"Full-order vector B:\t{B.shape=}")
print(f"\nInitial condition:\t{q0.shape=}")
print(f"\nAll FOM solutions:\t{Q_all.shape=}")
print(f"Training snapshots:\t{Q.shape=}")
Full-order solve...done in 0.29 s.
Spatial domain:		x.shape=(1023,)
Spatial step size:	dx=0.0009765625

Full time domain:	t_all.shape=(1001,)
Training time domain:	t.shape=(200,)
Temporal step size:	dt=0.001000

Full-order matrix A:	A.shape=(1023, 1023)
Full-order vector B:	B.shape=(1023,)

Initial condition:	q0.shape=(1023,)

All FOM solutions:	Q_all.shape=(1023, 1001)
Training snapshots:	Q.shape=(1023, 200)

The following code visualizes the training data and the full FOM solution set by plotting a few snapshots over the spatial domain and the time evolution of the snapshots at a few spatial locations.

def plot_data_space(Z, u, title, ax=None):
    """Plot state data over space at multiple instances in time."""
    if ax is None:
        _, ax = plt.subplots(1, 1)

    # Plot a few snapshots over the spatial domain.
    sample_columns = [0] + [2**d for d in range(10)]
    while sample_columns[-1] > Z.shape[1] - 1:
        sample_columns = sample_columns[:-1]
    color = iter(plt.cm.viridis_r(np.linspace(0.05, 1, len(sample_columns))))
    for j in sample_columns:
        leftBC, rightBC = [u(t_all[j])], [u(t_all[j])]
        q_all = np.concatenate([leftBC, Z[:, j], rightBC])
        c = next(color)
        ax.plot(x_all, q_all, lw=1, color=c, label=rf"$q(x,t_{{{j}}})$")

    ax.set_xlim(x_all[0], x_all[-1])
    ax.set_xlabel(r"$x$")
    ax.set_ylabel(r"$q(x,t)$")
    ax.legend(loc=(1.05, 0.05))
    ax.set_title(title)


def plot_data_time(Z, title, ax=None):
    """Plot state in time at multiple spatial locations."""
    if ax is None:
        _, ax = plt.subplots(1, 1)

    # Plot the state at a few spatial locations over time.
    sample_rows = np.linspace(0, Z.shape[0] - 1, 11)
    sample_rows = sample_rows[:-1] + (sample_rows[1] - sample_rows[0]) / 4
    sample_rows = sample_rows.astype(int)
    color = iter(plt.cm.inferno(np.linspace(0, 0.8, len(sample_rows))))
    tt = t_all[: Z.shape[1]]
    for i in sample_rows:
        ax.plot(tt, Z[i], lw=1, color=next(color), label=rf"$q(x_{{{i}}},t)$")

    ax.set_xlim(t_all[0], t_all[-1])
    ax.set_xlabel(r"$t$")
    ax.set_ylabel(r"$q(x,t)$")
    ax.legend(loc=(1.05, 0.05))
    ax.set_title(title)


def plot_two_datasets(Z1, Z2, u, title1="", title2="", cutoff=None):
    """Plot two datasets side by side with space and time plots."""
    _, [ax1, ax2] = plt.subplots(1, 2, sharex=True, sharey=True)
    plot_data_space(Z1, u, title1, ax1)
    plot_data_space(Z2, u, title2, ax2)
    ax1.legend([])

    fig, [ax1, ax2] = plt.subplots(2, 1, sharex=True, sharey=True)
    plot_data_time(Z1, title1, ax1)
    plot_data_time(Z2, title2, ax2)
    ax1.legend([])
    ax1.set_xlabel("")
    fig.subplots_adjust(hspace=0.3)
    if cutoff is not None:
        ax1.axvline(cutoff, color="gray", linewidth=1, linestyle="--")
        ax1.text(cutoff - 10 * dt, 0, "training", ha="right", color="gray")
        ax1.text(cutoff + 10 * dt, 0, "prediction", ha="left", color="gray")

    plt.show()
plot_two_datasets(
    Q,
    Q_all,
    training_input,
    "Snapshot data for training",
    "Full-order model solution",
    cutoff=t[-1],
)

ROM Construction#

We now have snapshot data \(\Q \in \RR^{n \times k}\), but to learn a model with external inputs, we need training data for the inputs as well as for the snapshots. Define the vector

\[ \begin{aligned} \U = \left[\begin{array}{cccc} u_\text{train}(t_0) & u_\text{train}(t_1) & \cdots & u_\text{train}(t_{k-1}) \end{array}\right] \in\RR^{k}, \end{aligned} \]

which collects the values of the training input function at the same times as the training snapshots.

U = training_input(t)

print(f"Training snapshots:\t{Q.shape=}")
print(f"Training inputs:\t{U.shape=}")
Training snapshots:	Q.shape=(1023, 200)
Training inputs:	U.shape=(200,)

We will use an opinf.basis.PODBasis to reduce the dimension of the snapshot training data, which approximates the discretized state vector as \(\q(t) \approx \Vr\qhat(t)\) for some \(\Vr\in\RR^{n\times r}\) with orthonormal columns and \(\qhat(t)\in\RR^{r}\), with \(r\ll n\). Input training data are typically not compressed or subjected to other pre-processing routines. Because the FOM (8) has the linear time-invariant form \(\ddt\q(t) = \A\q(t) + \B u(t)\), we seek a ROM with the same structure, i.e.,

\[ \begin{aligned} \ddt\qhat(t) = \Ahat\qhat(t) + \Bhat u(t), \qquad \qhat(0) = \Vr\trp\q_0. \end{aligned} \]
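
As a standalone illustration of the compression step (which rom.fit() below performs internally), here is a minimal sketch using the snapshot matrix Q from above; it assumes the compress()/decompress() methods of opinf.basis.PODBasis.

# Minimal sketch of the dimensionality reduction step on its own;
# the ROM class used below does the equivalent internally.
basis = opinf.basis.PODBasis(residual_energy=1e-6)
basis.fit(Q)                            # Compute the POD basis from snapshots.
Q_reduced = basis.compress(Q)           # Reduced coordinates, shape (r, k).
Q_lifted = basis.decompress(Q_reduced)  # Back to full order, shape (n, k).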

Data for the time derivative \(\ddt\qhat(t)\) are estimated in this example with sixth-order finite differences using opinf.ddt.UniformFiniteDifferencer. The underlying least-squares problem to determine \(\Ahat\) and \(\Bhat\) is given by

\[ \begin{aligned} \min_{\Ahat,\Bhat} \sum_{j=0}^{k-1}\left\| \Ahat\qhat_{j} + \Bhat u_{j} - \dot{\qhat}_j \right\|_{2}^{2}, \end{aligned} \]

where \(\qhat_j = \qhat(t_j)\in\RR^{r}\) and \(u_j = u(t_j)\in\RR\) are the state snapshots and input data, respectively, and \(\dot{\qhat}_j \approx \ddt\qhat(t)|_{t=t_j}\in\RR^{r}\) are the estimated time derivatives.
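
Conceptually, this regression is an ordinary linear least-squares problem that can be written down directly. The following sketch shows a hand-rolled version with plain NumPy; the names Qhat (reduced snapshots, \(r \times k\)) and Qhat_dot (estimated time derivatives, \(r \times k\)) are hypothetical stand-ins for arrays that rom.fit() assembles internally.

# Hypothetical by-hand version of the OpInf regression; rom.fit()
# below solves an equivalent problem internally.
r = Qhat.shape[0]
D = np.vstack([Qhat, U.reshape(1, -1)]).T           # Data matrix, (k, r + 1).
O, *_ = np.linalg.lstsq(D, Qhat_dot.T, rcond=None)  # Solve min ||D O - Qhat_dot.T||.
Ahat = O[:r].T   # Learned reduced state matrix, (r, r).
Bhat = O[r:].T   # Learned reduced input matrix, (r, 1).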

Why Use the Same Structure?

An OpInf ROM should have the same structure as an intrusive Galerkin ROM. The Galerkin ROM for (8) is derived by substituting in the approximation \(\q(t)\approx\Vr\qhat(t)\), yielding

\[ \begin{aligned} \ddt\Vr\qhat(t) = \A\Vr\qhat(t) + \B u(t), \qquad \Vr\qhat(0) = \q_0. \end{aligned} \]

Next, left multiply by \(\Vr\trp\) and use the fact that \(\Vr\trp\Vr = \I\) to get the following:

\[ \begin{aligned} \ddt\qhat(t) = \tilde{\A}\qhat(t) + \tilde{\B}u(t), \qquad \qhat(0) = \Vr\trp\q_0, \end{aligned} \]

where \(\tilde{\A} = \Vr\trp\A\Vr \in \RR^{r\times r}\) and \(\tilde{\B} = \Vr\trp\B\in\RR^{r}\). Note that this ROM has the same input function \(u(t)\) as the FOM.
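
In code, this projection amounts to a pair of matrix products. A minimal sketch, assuming Vr is an \((n, r)\) array whose columns are the orthonormal basis vectors (later we use fom.galerkin(), which produces the projected model directly):

# Minimal sketch of the intrusive Galerkin projection, assuming an
# (n, r) basis matrix Vr with orthonormal columns.
Atilde = Vr.T @ (A @ Vr)   # Projected state matrix, (r, r).
Btilde = Vr.T @ B          # Projected input vector, (r,).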

Training input data are passed to opinf.roms.ROM.fit() as the inputs argument.

rom = opinf.ROM(
    basis=opinf.basis.PODBasis(residual_energy=1e-6),
    ddt_estimator=opinf.ddt.UniformFiniteDifferencer(t, "ord6"),
    model=opinf.models.ContinuousModel("AB"),
)

with opinf.utils.TimedBlock("Fitting OpInf ROM"):
    rom.fit(Q, inputs=U)

with opinf.utils.TimedBlock("Reduced-order solve"):
    Q_ROM = rom.predict(q0, t_all, input_func=training_input, method="BDF")
Fitting OpInf ROM...done in 0.01 s.
Reduced-order solve...done in 0.01 s.
plot_two_datasets(
    Q_ROM,
    Q_all,
    training_input,
    "Reduced-order model solution",
    "Full-order model solution",
    cutoff=t[-1],
)

For a closer look at the difference between the FOM and ROM solutions, we compute the relative \(\ell_2\)-norm error of the ROM solution as a function of time using opinf.post.lp_error() and the relative Frobenius-norm error using opinf.post.frobenius_error().

def plot_errors_over_time(
    Ztrue, basis, Z1, label1, Z2=None, label2=None, cutoff=None
):
    """Plot normalized absolute projection error and ROM error(s)
    as a function of time.
    """
    _, ax = plt.subplots(1, 1)

    projection_err = opinf.post.lp_error(Ztrue, basis.project(Ztrue))[1]
    ax.semilogy(t_all, projection_err, "C3-", lw=1, label="Projection Error")

    relative_error = opinf.post.lp_error(Ztrue, Z1)[1]
    ax.semilogy(t_all, relative_error, "C0--", lw=1, label=label1)

    if Z2 is not None:
        relative_error = opinf.post.lp_error(Ztrue, Z2)[1]
        ax.semilogy(t_all, relative_error, "C5-.", lw=1, label=label2)

    if cutoff is not None:
        ax.axvline(cutoff, color="gray", linewidth=1, linestyle="--")
        ymin = projection_err.min() / 4
        ax.text(cutoff - 10 * dt, ymin, "training", ha="right", color="gray")
        ax.text(cutoff + 10 * dt, ymin, "prediction", ha="left", color="gray")
        ax.set_ylim(bottom=ymin / 2)

    ax.set_xlim(t_all[0], t_all[-1])
    ax.set_xlabel(r"$t$")
    ax.set_ylabel("Relative error")
    ax.legend(loc="lower right")
    plt.show()
plot_errors_over_time(Q_all, rom.basis, Q_ROM, "OpInf ROM error", cutoff=t[-1])
error_opinf = opinf.post.frobenius_error(Q_all, Q_ROM)[1]
print(f"OpInf ROM error:\t{error_opinf:.4e}")
OpInf ROM error:	8.8303e-04

Comparison to the Intrusive Galerkin ROM#

The classical intrusive Galerkin ROM for this problem is given by

\[ \begin{aligned} \ddt\qhat(t) = \tilde{\A}\qhat(t) + \tilde{\B}u(t), \qquad \qhat(0) = \Vr\trp\q_0, \end{aligned} \]

where \(\tilde{\A} = \Vr\trp\A\Vr \in \RR^{r\times r}\) and \(\tilde{\B} = \Vr\trp\B\in\RR^{r}\). Here, we form this ROM explicitly (using the same basis matrix \(\Vr\) as before) and compare it to our existing OpInf ROM.

rom_intrusive = opinf.ROM(
    basis=rom.basis,
    model=fom.galerkin(rom.basis.entries),  # Explicitly project FOM operators.
)
with opinf.utils.TimedBlock("Reduced-order model solve (intrusive)"):
    Q_ROM_intrusive = rom_intrusive.predict(
        q0, t_all, input_func=training_input, method="BDF"
    )
Reduced-order model solve (intrusive)...done in 0.01 s.
plot_errors_over_time(
    Q_all,
    rom.basis,
    Q_ROM,
    "OpInf ROM error",
    Q_ROM_intrusive,
    "Intrusive ROM error",
    cutoff=t[-1],
)
error_intrusive = opinf.post.frobenius_error(Q_all, Q_ROM_intrusive)[1]
error_projection = rom.basis.projection_error(Q_all, relative=True)

print(
    "Relative Frobenius-norm errors",
    "-" * 33,
    f"Projection error:\t{error_projection:%}",
    f"OpInf ROM error:\t{error_opinf:%}",
    f"Intrusive ROM error:\t{error_intrusive:%}",
    sep="\n",
)
Relative Frobenius-norm errors
---------------------------------
Projection error:	0.019187%
OpInf ROM error:	0.088303%
Intrusive ROM error:	0.089150%

In this experiment, the OpInf ROM and the corresponding intrusive ROM have comparable error, even though the OpInf ROM is calibrated without intrusive access to the FOM.

Generalization to New Inputs#

The previous experiment used a single choice of \(u(t)\) for both training and prediction in time. Now, we define a new input function,

\[ \begin{aligned} u_\text{test}(t) = 1 + t(1 - t), \end{aligned} \]

and evaluate the FOM and ROM for this new input.

def test_input(t):
    return 1 + t * (1 - t)
with opinf.utils.TimedBlock("Full-order solve"):
    Qtest_FOM = full_order_solve(t_all, test_input)

with opinf.utils.TimedBlock("Reduced-order solve (OpInf)"):
    Qtest_ROM = rom.predict(q0, t_all, test_input, method="BDF")

with opinf.utils.TimedBlock("Reduced-order solve (intrusive)"):
    Qtest_ROM_intrusive = rom_intrusive.predict(
        q0, t_all, test_input, method="BDF"
    )
Full-order solve...done in 0.27 s.
Reduced-order solve (OpInf)...done in 0.01 s.
Reduced-order solve (intrusive)...done in 0.01 s.
plot_two_datasets(
    Qtest_ROM,
    Qtest_FOM,
    test_input,
    "OpInf Reduced-order model solution",
    "Full-order model solution",
)
plot_errors_over_time(
    Qtest_FOM,
    rom.basis,
    Qtest_ROM,
    "OpInf ROM error",
    Qtest_ROM_intrusive,
    "Intrusive ROM error",
)

Both ROMs perform well with a new input function, but the intrusive ROM performs slightly better than the OpInf ROM. This is typical; intrusive ROMs are often more robust and generalizable than standard OpInf ROMs, but OpInf ROMs tend to reproduce training data better than intrusive ROMs.

Multiple Training Trajectories#

If data corresponding to several choices of the input function \(u(t)\) are available for training, we collect a list of snapshot matrices and a list of corresponding inputs to pass to fit().

Training Data Generation#

Below, we solve the PDE with three input functions to generate training data:

\[ \begin{aligned} &u_\text{train}^{(1)}(t) = e^{-t}, &&& &u_\text{train}^{(2)}(t) = 1 + \frac{1}{2}t^2, &&& &u_\text{train}^{(3)}(t) = 1 - \frac{1}{2}\sin(\pi t). \end{aligned} \]

The following input functions are used for testing.

\[ \begin{aligned} &u_\text{test}^{(1)}(t) = 1 - \frac{1}{3}\sin(3\pi t), &&& &u_\text{test}^{(2)}(t) = 1 + 25 (t (t - 1))^3, &&& &u_\text{test}^{(3)}(t) = 1 + e^{-2t}\sin(\pi t). \end{aligned} \]
training_inputs = [
    lambda t: np.exp(-t),
    lambda t: 1 + t**2 / 2,
    lambda t: 1 - np.sin(np.pi * t) / 2,
]

testing_inputs = [
    lambda t: 1 - np.sin(3 * np.pi * t) / 3,
    lambda t: 1 + 25 * (t * (t - 1)) ** 3,
    lambda t: 1 + np.exp(-2 * t) * np.sin(np.pi * t),
]
# Visualize the input functions.
fig, [ax1, ax2] = plt.subplots(1, 2, sharex=True)
c = 0
for input_func in training_inputs:
    ax1.plot(t_all, input_func(t_all), color=f"C{c}", lw=1)
    c += 1
for input_func in testing_inputs:
    ax2.plot(t_all, input_func(t_all), color=f"C{c}", lw=1)
    c += 1

ax1.set_title("Training inputs")
ax2.set_title("Testing inputs")
ax1.axvline(t[-1], color="gray", linewidth=1, linestyle="--")
ax1.text(t[-1] - 10 * dt, 1.4, "training", ha="right", color="gray")
ax1.text(t[-1] + 10 * dt, 1.4, "prediction", ha="left", color="gray")
for ax in (ax1, ax2):
    ax.set_xlim(t_all[0], t_all[-1])
    ax.set_xlabel(r"$t$")
    ax.set_ylabel(r"$u(t)$")
plt.show()
# Solve the full-order model for each training input and collect results.
Qs = []  # State snapshots.
Us = []  # Corresponding inputs.

for u in training_inputs:
    Qs.append(full_order_solve(t, u))
    Us.append(u(t))
rom = opinf.ROM(
    basis=opinf.basis.PODBasis(residual_energy=1e-6),
    ddt_estimator=opinf.ddt.UniformFiniteDifferencer(t, "ord6"),
    model=opinf.models.ContinuousModel("AB"),
)

with opinf.utils.TimedBlock("Fitting OpInf ROM"):
    rom.fit(Qs, inputs=Us)

rom_intrusive = opinf.ROM(
    basis=rom.basis,
    model=fom.galerkin(rom.basis.entries),
)
Fitting OpInf ROM...done in 0.05 s.
for i, u in enumerate(testing_inputs):
    print(f"Test input function {i+1:d}")

    with opinf.utils.TimedBlock("Full-order solve"):
        Q_FOM = full_order_solve(t_all, u)

    with opinf.utils.TimedBlock("Reduced-order solve (OpInf)"):
        Q_ROM = rom.predict(q0, t_all, u, method="BDF")

    with opinf.utils.TimedBlock("Reduced-order solve (intrusive)"):
        Q_ROM_intrusive = rom_intrusive.predict(q0, t_all, u, method="BDF")

    plot_two_datasets(
        Q_ROM,
        Q_FOM,
        u,
        "Reduced-order model solution (OpInf)",
        "Full-order model solution",
    )

    plot_errors_over_time(
        Q_FOM,
        rom.basis,
        Q_ROM,
        "OpInf ROM error",
        Q_ROM_intrusive,
        "Intrusive ROM error",
    )
Test input function 1
Full-order solve...done in 0.28 s.
Reduced-order solve (OpInf)...done in 0.01 s.
Reduced-order solve (intrusive)...done in 0.01 s.
Test input function 2
Full-order solve...done in 0.28 s.
Reduced-order solve (OpInf)...done in 0.01 s.
Reduced-order solve (intrusive)...done in 0.01 s.
Test input function 3
Full-order solve...done in 0.28 s.
Reduced-order solve (OpInf)...done in 0.01 s.
Reduced-order solve (intrusive)...done in 0.01 s.

Multi-dimensional Inputs

The examples in this tutorial use a scalar-valued input function \(u:\RR\to\RR\). For models with vector inputs \(\u:\RR\to\RR^m\) with \(m > 1\), training inputs are collected into a matrix with \(m\) rows:

\[ \begin{aligned} \U = \left[\begin{array}{cccc} \u(t_0) & \u(t_1) & \cdots & \u(t_{k-1}) \end{array}\right] \in \RR^{m \times k}. \end{aligned} \]

This is the matrix used for the inputs argument of fit().
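
For instance, a hypothetical two-component input could be handled as follows, with each component occupying one row of the input matrix.

# Hypothetical example with m = 2 input components.
def vector_input(tt):
    return np.array([np.sin(tt), np.exp(-tt)])

U_multi = vector_input(t)  # Shape (2, k): one row per input component.
# The m-by-k matrix U_multi is then passed to fit(), e.g.,
# rom.fit(Q, inputs=U_multi).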