Sparsity-Promoting Dynamic Mode Decomposition

In this tutorial we present a variant of DMD called Sparsity-Promoting DMD (SpDMD). The algorithm differs from the original one in the way it computes the DMD mode amplitudes: SpDMD "promotes" solutions in which many amplitudes are set to zero. This property of a vector (or of a matrix in general) is called sparsity. The parameter $\gamma$ controls how strongly sparsity is promoted.

SpDMD was presented in Sparsity-promoting dynamic mode decomposition (https://doi.org/10.1063/1.4863670). The algorithm relies on ADMM (see Distributed optimization and statistical learning via the alternating direction method of multipliers, http://dx.doi.org/10.1561/2200000016), an iterative method for solving optimization problems.

First of all, we import the modules needed for this tutorial.
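A minimal set of imports might look as follows (a sketch: we assume NumPy and Matplotlib for the numerics and the plots, and that the pydmd package, which provides both DMD and SpDMD, is installed):

```python
import numpy as np
import matplotlib.pyplot as plt

# PyDMD provides both the standard DMD and the sparsity-promoting variant.
from pydmd import DMD, SpDMD
```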

We consider several values of the parameter $\gamma$ in order to check how it affects the result:
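For instance, a logarithmically spaced grid can be used (the values below are illustrative, not a prescribed choice):

```python
# A logarithmically spaced grid of gamma values to test.
gammas = np.logspace(-2, 2, 10)
```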

As $\gamma \to 0$, SpDMD approaches the standard DMD algorithm, which does not impose sparsity on the result.

We now load a dataset related to a heat conductivity problem:
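The snippet below sketches the loading step; the file name heat_snapshots.npy is a placeholder, not the actual dataset path, and we assume the snapshots are arranged column-wise (shape: space x time), as PyDMD expects:

```python
# Hypothetical loading step: 'heat_snapshots.npy' is a placeholder file name.
snapshots = np.load("heat_snapshots.npy")
print(snapshots.shape)  # expected shape: (n_space, n_time)
```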

We use the dataset to train several instances of SpDMD (one for each value of $\gamma$ under consideration). We also create an instance of standard DMD for comparison:
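A sketch of the training loop (the truncation svd_rank=30 is an illustrative assumption, not a value prescribed by this tutorial):

```python
# One SpDMD instance per tested value of gamma.
spdmds = [SpDMD(svd_rank=30, gamma=g) for g in gammas]
for sp in spdmds:
    sp.fit(snapshots)

# A standard DMD instance for comparison.
dmd = DMD(svd_rank=30)
dmd.fit(snapshots)
```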

As you can see, each call to the method SpDMD.fit() prints the number of ADMM iterations. You can suppress this output by passing the flag verbose=False to the constructor of the class SpDMD:
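For example (gamma=1.0 is an arbitrary illustrative value):

```python
# verbose=False silences the per-fit report on the number of ADMM iterations.
quiet_spdmd = SpDMD(svd_rank=30, gamma=1.0, verbose=False)
quiet_spdmd.fit(snapshots)
```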

You can influence the number of ADMM iterations with the parameter rho of the constructor of the class SpDMD, which sets the ADMM penalty parameter. The optimal value may differ from problem to problem (reference: https://doi.org/10.3182/20120914-2-US-4030.00038).
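For example (the value rho=1.0e4 is illustrative; a good choice depends on the problem at hand):

```python
# rho is the ADMM penalty parameter; it affects how fast ADMM converges.
spdmd_rho = SpDMD(svd_rank=30, gamma=1.0, rho=1.0e4, verbose=False)
spdmd_rho.fit(snapshots)
```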

The maximum number of iterations of ADMM is $10^4$ by default, and can be tuned with the parameter max_iterations of the constructor of the class SpDMD.
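For example (again with illustrative values):

```python
# Raise the iteration cap for a slowly converging problem (default: 10**4).
spdmd_long = SpDMD(svd_rank=30, gamma=1.0, max_iterations=5 * 10**4,
                   verbose=False)
spdmd_long.fit(snapshots)
```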

Sparsity-promoting amplitudes

We now examine the results of the experiment set up above. First of all, we plot the number of non-zero amplitudes as $\gamma$ increases. As you can see, a sufficiently high value of $\gamma$ "breaks" the algorithm, in the sense that all the DMD amplitudes are set to zero. By contrast, for low values of $\gamma$ SpDMD converges to standard DMD (all amplitudes are non-zero).
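A sketch of this plot, assuming the fitted instances are stored in the list spdmds from the training snippet above:

```python
# Count the non-zero amplitudes of each fitted SpDMD instance.
nonzero = [np.count_nonzero(sp.amplitudes) for sp in spdmds]

plt.semilogx(gammas, nonzero, "o-")
plt.xlabel(r"$\gamma$")
plt.ylabel("Number of non-zero amplitudes")
plt.show()
```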

In order to visualize the situation, we plot the DMD amplitudes for the tested values of $\gamma$.
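A possible visualization (a sketch; the plotting style is a free choice):

```python
# Plot the amplitude magnitudes for each tested gamma, plus standard DMD.
for g, sp in zip(gammas, spdmds):
    plt.plot(np.abs(sp.amplitudes), "o", label=rf"$\gamma = {g:.2g}$")
plt.plot(np.abs(dmd.amplitudes), "x", label="standard DMD")
plt.xlabel("Mode index")
plt.ylabel("Amplitude magnitude")
plt.legend()
plt.show()
```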

Reconstruction and pointwise error

We use the instances of SpDMD and DMD constructed above to reconstruct the dataset at a particular time instant. Then we evaluate the pointwise error at that time instant, and we compare the results for different values of $\gamma$.
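A sketch of this comparison (the snapshot index time_idx=40 is an arbitrary illustrative choice):

```python
time_idx = 40  # illustrative time instant

# Pointwise error of each SpDMD reconstruction at the chosen time instant.
for g, sp in zip(gammas, spdmds):
    error = np.abs(snapshots[:, time_idx] - sp.reconstructed_data[:, time_idx])
    print(f"gamma={g:.2g}: max pointwise error = {error.max():.2e}")

# Same quantity for the standard DMD reconstruction.
dmd_error = np.abs(snapshots[:, time_idx] - dmd.reconstructed_data[:, time_idx])
print(f"standard DMD: max pointwise error = {dmd_error.max():.2e}")
```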

As you can see, the errors provided by SpDMD and standard DMD are of similar magnitude. The quality of the reconstruction degrades as $\gamma$ grows, with the worst case occurring when the number of non-zero amplitudes drops to zero.

Comparison of the time evolution of absolute error

To conclude the comparison, we plot side by side the time evolution of the absolute error (see the code below for our implementation of the function absolute_error) for the tested values of $\gamma$.
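A plausible implementation (a sketch: we take the elementwise absolute difference between the dataset and a reconstruction, then average over space to obtain one value per time instant):

```python
def absolute_error(original, reconstruction):
    """Elementwise absolute difference between data and reconstruction."""
    return np.abs(original - reconstruction)

for g, sp in zip(gammas, spdmds):
    err = absolute_error(snapshots, sp.reconstructed_data)
    # Collapse the spatial dimension to track the error over time.
    plt.plot(err.mean(axis=0), label=rf"$\gamma = {g:.2g}$")
plt.xlabel("Time index")
plt.ylabel("Mean absolute error")
plt.legend()
plt.show()
```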