Learning emergent partial differential equations in a learned emergent space
Previous work in our lab uses nonlinear dimensionality reduction techniques to embed the states of a high-dimensional dynamical system in a lower-dimensional submanifold, providing insight into the system's true coarse dynamics.
The technique treats each state in a simulated or measured time series (or a corpus of such time series) as a high-dimensional point. The resulting cloud of points is then amenable to various dimensionality reduction techniques; we choose diffusion maps for their ability to uncover nonlinear embedded manifolds.
This new project uses datasets of a similar form: trajectory matrices. However, instead of using rows of the matrix as high-dimensional points, we use columns (or patches) of the matrix. This technique is applicable to simulations of many coupled agents (such as typical neural simulations) or to method-of-lines discretizations of partial differential equations. Rows of the trajectory matrices are indexed by timestep, while columns are indexed by agent ID or PDE discretization cell.
In the simpler case where columns are used, this method for processing a trajectory uncovers the underlying continuum of types of dynamics. For PDEs in space and time, this continuum is often isomorphic to the underlying physical space. For ODEs describing the behavior of a population of agents (such as neurons), we uncover an effective space for the agents, which may reflect (1) physical space, (2) heterogeneous parameters, (3) initial conditions, or (4) steady states.
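The column-based pipeline can be sketched in a few lines of numpy. This is a minimal illustration, not the project's actual implementation: the traveling-bump data, the 40-agent setup, and the median-distance kernel-scale heuristic are all assumptions made for the example. Each column (one agent's time series) is treated as a point, and the first nontrivial diffusion-map coordinate recovers the agents' hidden spatial order.

```python
import numpy as np

# Hypothetical trajectory matrix U[t, i]: each agent i has a hidden spatial
# label x_i (never shown to the algorithm) that sets when a traveling bump
# passes through it.
x = np.linspace(0.0, 5.0, 40)                       # hidden spatial labels
t = np.linspace(0.0, 10.0, 200)                     # timesteps
U = np.exp(-(x[None, :] - 0.5 * t[:, None]) ** 2)   # (time, agents)

# Treat each COLUMN (one agent's full time series) as a high-dimensional point.
pts = U.T                                           # (agents, time)
D2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)

# Diffusion map: Gaussian kernel, Markov normalization via the symmetric form.
eps = np.median(D2)                                 # crude kernel-scale heuristic
K = np.exp(-D2 / eps)
d = K.sum(axis=1)
A = K / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(A)                      # eigenvalues ascending
phi = vecs[:, -2] / np.sqrt(d)                      # first nontrivial coordinate

# phi should order the agents by their hidden position x.
print(abs(np.corrcoef(phi, x)[0, 1]))               # close to 1
```

In practice the kernel scale `eps` matters a great deal; the median heuristic here is only a starting point, and more than one diffusion coordinate may be needed when the effective space is multi-dimensional.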
This project was conceived in support of my primary thesis topic, in which heterogeneous parameters for a population of agents are used as abscissae for describing the system state as a sum of smooth basis functions. (This is analogous to the use of Fourier expansions in physical space for the solution of, e.g., underground reservoir simulations.) However, when simulations involve many heterogeneous parameters, or when trajectory matrices represent real-world experimental data, it is not always clear what the underlying heterogeneous parameters are. Hence, the emergent space technique aims to extract the relevant heterogeneities directly from sample trajectories.
Global and local reduced models for interacting, heterogeneous agents
Large collections of coupled, heterogeneous agents can manifest complex dynamical behavior that presents difficulties for simulation and analysis. However, if the collective dynamics lie on a low-dimensional manifold, then the original agent-based model may be approximated with a simplified surrogate model on and near the low-dimensional space where the dynamics live. Analytically identifying such simplified models can be challenging or impossible, but here we present a data-driven coarse-graining methodology for discovering such reduced models.
We consider two types of reduced models: globally based models, which predict the dynamics of each agent from information about the whole ensemble, and locally based models, which predict the dynamics of an agent from information about just a subset of agents close to it (close in heterogeneity space, not physical space). For both approaches, we are able to learn laws governing the behavior of the reduced system on the low-dimensional manifold directly from time series of states from the agent-based system. These laws take the form of either a system of ordinary differential equations (ODEs), for the globally based approach, or a partial differential equation (PDE), in the locally based case.
For each technique, we employ a specialized artificial neural network integrator that has been templated on an Euler time stepper (i.e., a ResNet) to learn the laws of the reduced model. As part of our methodology, we utilize the proper orthogonal decomposition (POD) to identify the low-dimensional space of the dynamics. Our globally based technique uses the resulting POD basis to define a set of coordinates for the agent states in this space and then seeks to learn the time evolution of these coordinates as a system of ODEs.
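The globally based pipeline can be sketched as follows. This is a toy illustration under assumed data, not the paper's actual model: the snapshots come from a hand-built two-dimensional oscillator lifted into 40 dimensions, and `euler_step` only shows the Euler-templated (ResNet-like) structure; in the actual method a trained network would play the role of `f`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy snapshot data: a 2-D harmonic oscillator lifted into 40 dimensions.
t = np.linspace(0.0, 20.0, 400)
latent = np.stack([np.cos(t), np.sin(t)], axis=1)   # (time, 2)
X = latent @ rng.normal(size=(2, 40))               # (time, 40) snapshot matrix

# POD: SVD of the mean-centered snapshot matrix.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1         # modes for 99.9% energy
coords = Xc @ Vt[:k].T                              # reduced POD coordinates
print(k)                                            # 2: the hidden dimension

# Euler-templated ("ResNet-like") stepper: a_{n+1} = a_n + dt * f(a_n).
# In the actual method, f is a neural network trained so that repeated
# steps reproduce the observed evolution of `coords`.
def euler_step(a, dt, f):
    return a + dt * f(a)
```

The Euler template bakes the timestep into the architecture, so the network only has to represent the right-hand side of the reduced ODE system rather than the full flow map.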
For the locally based technique, we propose a methodology for learning a partial differential equation representation of the agents; the PDE law depends on the state variables and on partial derivatives of the state variables with respect to model heterogeneities. We require that the state variables be smooth with respect to model heterogeneities, which permits us to cast the discrete agent-based problem as a continuous one in heterogeneity space. The agents in such a representation bear similarity to the discretization points used in typical finite element/volume methods.
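A minimal numpy illustration of the locally based idea, using a least-squares fit over a small derivative library in place of the paper's neural-network learner: agents indexed by a heterogeneity parameter p evolve under a hidden diffusion law u_t = u_pp (an assumed ground truth chosen for the example), and regressing the observed time derivatives onto u, u_p, and u_pp recovers that law.

```python
import numpy as np

# "Agents" indexed by a heterogeneity parameter p in [0, 1].
# Hidden ground-truth law: u_t = u_pp (diffusion in heterogeneity space).
n_p = 50
dp = 1.0 / (n_p - 1)
p = np.linspace(0.0, 1.0, n_p)
u = np.sin(np.pi * p) + 0.5 * np.sin(3 * np.pi * p)   # smooth initial profile

dt, n_steps = 1e-4, 200                               # stable: dt < dp**2 / 2
snapshots = [u.copy()]
for _ in range(n_steps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dp**2  # 3-point Laplacian
    u = u + dt * lap                                    # explicit Euler step
    snapshots.append(u.copy())
U = np.array(snapshots)                                 # (time, agents)

# Candidate library: u, u_p, u_pp (derivatives w.r.t. heterogeneity p).
u_t = (U[1:] - U[:-1]) / dt                             # time derivatives
V = U[:-1]
u_p = np.gradient(V, dp, axis=1)                        # central differences
u_pp = np.zeros_like(V)
u_pp[:, 1:-1] = (V[:, 2:] - 2 * V[:, 1:-1] + V[:, :-2]) / dp**2

# Least-squares fit of u_t against the library (interior agents only).
interior = (slice(None), slice(1, -1))
A = np.stack([V[interior].ravel(),
              u_p[interior].ravel(),
              u_pp[interior].ravel()], axis=1)
coeffs, *_ = np.linalg.lstsq(A, u_t[interior].ravel(), rcond=None)
print(coeffs)   # approximately [0, 0, 1]: the u_pp term is recovered
```

In the method itself, the neural network replaces this fixed linear library, so the learned right-hand side can depend nonlinearly on the state and its heterogeneity derivatives.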
As an illustration of the efficacy of our techniques, we consider a simplified coupled neuron model for rhythmic oscillations in the pre-Bötzinger complex and demonstrate how our data-driven surrogate models are able to produce dynamics comparable to the dynamics of the full system. A nontrivial conclusion is that the dynamics can be equally well reproduced by an all-to-all coupled and by a locally coupled model of the same agents.
An Emergent Space for Distributed Data with Hidden Internal Order through Manifold Learning
Felix Kemeth, Sindre Haugland, Felix Dietrich, Tom Bertalan, Kevin Höhlein, Qianxiao Li, Erik M. Bollt, Ronen Talmon, Katharina Krischer, and Ioannis Kevrekidis. IEEE Access 6 (2018). DOI: 10.1109/ACCESS.2018.2882777
Manifold-learning techniques are routinely used in mining complex spatiotemporal data to extract useful, parsimonious data representations/parametrizations; these are, in turn, useful in nonlinear model identification tasks. We focus here on the case of time series data that can ultimately be modeled as a spatially distributed system [e.g., a partial differential equation (PDE)], but where we do not know the space in which this PDE should be formulated. Hence, even the spatial coordinates for the distributed system themselves need to be identified, to "emerge from," the data mining process. We will first validate this "emergent space" reconstruction for time series sampled without space labels in known PDEs; this brings up the issue of observability of physical space from temporal observation data, and of the transition from spatially resolved to lumped (order-parameter-based) representations obtained by tuning the scale of the data mining kernels. We will then present actual emergent space "discovery" illustrations. Our illustrative examples include chimera states (states of coexisting coherent and incoherent dynamics), as well as chaotic and quasiperiodic spatiotemporal dynamics, arising in partial differential equations and/or in heterogeneous networks. We also discuss how data-driven "spatial" coordinates can be extracted in ways invariant to the nature of the measuring instrument. Such gauge-invariant data mining can go beyond the fusion of heterogeneous observations of the same system, to the possible matching of apparently different systems.