
Dynamical-Systems-Based Planning

Prerequisites

  • Basic knowledge of dynamical systems (DS)
  • Control theory, system stability
  • Diffeomorphic mapping

General Motivation

In trajectory planning problems, the robot’s objective is to generate smooth, stable, and goal-directed motions that can adapt to changes in the environment or task — beyond simply following a fixed path. This is where dynamical systems (DS) offer a powerful framework: instead of relying on time-parameterized trajectories, DS-based approaches define a continuous vector field that governs the robot’s motion toward a target.

While the idea of using DS may appear conceptually simple, it provides a flexible and reactive foundation for robot motion generation. A key advantage lies in its ability to generalize to different start positions, adapt online to perturbations, and naturally handle convergence, stability, and obstacle avoidance within a unified structure.

This is why, in recent years, dynamical system-based methods have gained prominence in robotic motion planning and control, particularly in scenarios requiring real-time adaptation. In industrial robotics, DS approaches have been successfully applied to tasks such as surface finishing, spraying, and assembly, where motion must adapt to variations in the environment. In physical human-robot interaction, DS frameworks also enable robots to generate compliant and predictable motions that respond continuously to human inputs — making shared control and learning from demonstration both efficient and intuitive.

Conceptual Exercise

Classify each of the following properties: is it a key feature of DS-based planning, or not?

Key features of DS-based planning

  • Real-time adaptability
  • Goal convergence
  • Reactive to perturbations

Not features of DS-based planning (typical of fixed, pre-planned trajectories)

  • Open-loop execution
  • Requires full trajectory specification in advance
  • High reliance on precise timing

Course Content

Dynamical-Systems–Based Planning Overview

Motivation & Programming-by-Demonstration

Robotic path planning via dynamical systems learns continuous, time-invariant vector fields from human demonstrations (“Programming by Demonstration”) rather than hand-crafting trajectories [1]. By modeling motions as autonomous systems

$$ \dot\xi = f(\xi), $$

robots react immediately to perturbations, offering smooth, robust replanning [2].
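To make this concrete, the sketch below (our illustration, not code from the cited works) integrates such a vector field with explicit Euler steps. A linear field $f(\xi) = -A(\xi - \xi^*)$ with positive-definite $A$ stands in for a learned nonlinear $f$; any start point, or any mid-rollout perturbation of the state, still converges to the goal.

```python
import numpy as np

# Minimal sketch: integrating a time-invariant DS  xi_dot = f(xi).
# f here is a simple globally stable linear field, f(xi) = -A (xi - goal),
# with A positive definite; a learned DS would replace f with a nonlinear model.

def rollout(f, xi0, dt=0.01, steps=500):
    xi = np.array(xi0, dtype=float)
    path = [xi.copy()]
    for _ in range(steps):
        xi += dt * f(xi)          # Euler step along the vector field
        path.append(xi.copy())
    return np.array(path)

goal = np.array([0.0, 0.0])
A = np.array([[4.0, 0.0], [0.0, 2.0]])   # positive definite -> global stability
f = lambda xi: -A @ (xi - goal)

print(rollout(f, xi0=[1.0, -0.5])[-1])   # ~goal, regardless of the start point
```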

Dynamical system example
Figure: Dynamical system model embedding different ways of performing a task in one single model. The robot follows an arc, a sine, or a straight line starting from different points in the workspace.
Shiferaw, T. (2025) Advanced robotic manipulation with impedance control. MathWorks. Available at: https://ch.mathworks.com/company/technical-articles/enhancing-robot-precision-and-safety-with-impedance-control.html

Nonlinear dynamical systems have recently emerged as a powerful framework for capturing robotic motor skills. In particular, endpoint-to-endpoint behaviors can be encoded directly as time-invariant vector fields, forming reusable “movement primitives” (MPs) that drive a wide array of manipulation tasks. Unlike traditional trajectory planners, DS-based methods naturally absorb disturbances by treating the goal as a globally attracting equilibrium, while the precise motion profiles are acquired from demonstration data.

Classical DS Models

  • Dynamic Movement Primitives (DMP): encodes each degree of freedom separately with a time-dependent forcing term; yields fast one-shot learning but limited coupling across dimensions [3].
    DMP formulates motions as a non-autonomous dynamical system. In essence, a DMP augments a simple linear attractor with a learned nonlinear forcing term to reproduce complex trajectories from demonstrations. To guarantee convergence, the nonlinear component is gradually attenuated near the goal by a phase variable, smoothly reverting the system to its stable linear form (a minimal one-dimensional sketch follows this list). However, this external phase-driven modulation can warp the timing of the original motion, limiting DMP’s ability to extrapolate beyond the demonstrated paths.

To address this limitation, more recent approaches adopt time-independent models that maintain the spatial and temporal structure of demonstrations under perturbations. By decoupling motion generation from an explicit phase, these methods focus on “what to imitate” rather than “when to imitate,” enabling robust generalization to unseen regions of the workspace. An appealing alternative is the Stable Estimator of Dynamical Systems (SEDS) [2].

  • Stable Estimator of Dynamical Systems (SEDS): fits a Gaussian Mixture Model (GMM) to demonstrations under convex constraints guaranteeing global asymptotic stability at the goal [2]. However, its quadratic Lyapunov-function constraint can limit reproduction accuracy when demonstrations violate purely contractive dynamics.
  • Control-Lyapunov Function DS (CLF-DM): learns a Lyapunov candidate by constrained regression, ensuring stability via sum-of-squares certificates [4].
  • LAGS-DS (Locally Active, Globally Stable DS): augments a stable global attractor with local, state-dependent modulation for higher fidelity near demonstrations, yet retains global convergence [6].
  • Gaussian-Process DS: Bayesian nonparametric vector fields with posterior uncertainty and stability enforced via contraction metrics [7].
  • Neural ODEs for DS: parameterize $f(\xi)$ as a continuous-depth neural network, with stability imposed by spectral normalization or contraction theory [8].
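Returning to the DMP formulation above, here is the promised minimal one-dimensional sketch (our illustration; the gains and basis-function choices are typical textbook values, not taken from a specific library). It shows the linear attractor, the phase-gated forcing term, and the phase variable that attenuates the forcing near the goal:

```python
import numpy as np

# Minimal 1-D discrete DMP sketch (illustrative).
# Transformation system: tau * v_dot = alpha * (beta * (g - x) - v) + f(s)
# Canonical system:      tau * s_dot = -alpha_s * s   (phase decays from 1 to 0)
# The forcing term f(s) is gated by the phase s, so it vanishes near the goal
# and the system reverts to its stable linear form.

def dmp_rollout(g, x0, weights, centers, widths,
                alpha=25.0, beta=25.0 / 4, alpha_s=3.0, tau=1.0,
                dt=0.001, T=1.0):
    x, v, s = x0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)              # RBFs on the phase
        force = s * (g - x0) * (psi @ weights) / (psi.sum() + 1e-10)
        v += dt * (alpha * (beta * (g - x) - v) + force) / tau  # spring-damper + forcing
        x += dt * v / tau
        s += dt * (-alpha_s * s) / tau                          # phase update
        traj.append(x)
    return np.array(traj)

centers = np.exp(-3.0 * np.linspace(0.0, 1.0, 10))  # basis centers along the phase
widths = np.full(10, 50.0)
weights = np.zeros(10)                              # zero forcing -> pure attractor
traj = dmp_rollout(g=1.0, x0=0.0, weights=weights, centers=centers, widths=widths)
print(traj[-1])                                     # ~1.0: converges to the goal
```

In practice, the weights are typically fit by locally weighted regression so that the forcing term reproduces a demonstrated trajectory.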

Benchmarks & Tools

  • LASA Handwriting Dataset: 24 handwriting motions used extensively to compare DS methods [9].
  • Toolboxes:
    • EPFL-LASA’s SEDS ROS packages (https://github.com/epfl-lasa/icra-lfd-tutorial)
    • EPFL-LASA’s LAGSDS ROS tasks (https://github.com/epfl-lasa/kuka-lagsds-tasks)

Stability

Within a dynamical systems (DS) framework, achieving system stability alongside accuracy is essential. As robots learn motor skills via imitation learning (IL), robustness becomes paramount: the controller must generalize reliably and continue to converge on the intended behavior despite disturbances or variations. To reinforce stability in DS, three primary approaches are typically employed: Lyapunov functions (LF), Contraction Theory (CT), and diffeomorphic transformations. Each of these methods strengthens the learning system’s resilience by mitigating deviations and external perturbations. In the following, we examine the fundamental principles of these three techniques and their roles in enhancing DS stability.

Lyapunov stability

Lyapunov functions (LFs) provide a scalar measure—often thought of as the “energy” or “potential”—of a dynamical system. In control theory, they are indispensable for proving that a system will remain stable and converge to a target behavior. When applied to imitation learning, LF-based methods seek to construct a function that satisfies the usual Lyapunov conditions, then tune it via optimization (e.g., gradient descent, trust-region algorithms or neural-network training). By showing that this function consistently decreases along the system’s trajectories, these approaches guarantee that the learned policy is both stable and convergent.
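The sketch below (ours) illustrates the core check behind these methods: given a quadratic candidate $V(x) = x^\top P x$ with $P \succ 0$, verify that $\dot V = \nabla V(x)^\top f(x) < 0$ along sampled states of a vector field $f$. In learning-based approaches, this condition becomes a constraint or penalty during optimization rather than a post-hoc test.

```python
import numpy as np

# Numerical check of the Lyapunov decrease condition for V(x) = x^T P x
# (illustrative only). Stability requires V(x) > 0 for x != x* and
# dV/dt = grad V(x) . f(x) < 0 along the dynamics.

P = np.array([[2.0, 0.0], [0.0, 1.0]])             # positive-definite candidate
f = lambda x: -x + 0.1 * np.array([-x[1], x[0]])   # example stable vector field

def lyapunov_decreasing(f, P, samples):
    for x in samples:
        gradV = 2.0 * P @ x                        # gradient of x^T P x
        if gradV @ f(x) >= 0 and np.linalg.norm(x) > 1e-8:
            return False                           # condition violated at x
    return True

rng = np.random.default_rng(0)
print(lyapunov_decreasing(f, P, rng.normal(size=(1000, 2))))  # True
```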

Because a single global Lyapunov constraint can be overly conservative, LAGS-DS improves local tracking by allowing state-dependent gains near the demonstration manifold, though it sacrifices some of the stiffness of a pure global attractor [6].

The CLF-DM approach reduces conservatism by learning a control Lyapunov function via weighted asymmetric quadratics, yet it applies runtime corrections that can disrupt the learned DS.

Although Artstein and Sontag’s theory of control Lyapunov functions provides the foundation for stability enforcement, balancing precision and robustness in learned systems remains an open challenge.

Lemme et al.’s Neurally Imprinted Stable Vector Fields (NIVF) employ a neurally learned Lyapunov candidate with quadratic programming, achieving high accuracy but only local stability and requiring expensive ex-post verification.

Contraction theory

Contraction theory (CT) offers a powerful means to certify stability and robustness in imitation‐learned controllers. Rather than tracking a single nominal trajectory, CT examines how distance between any two trajectories evolves over time. By identifying a metric under which the system’s dynamics cause all trajectories to shrink toward each other—i.e., to “contract”—one can guarantee exponential convergence to the desired behavior, even in the presence of disturbances or modeling errors.

  • Partial Contraction DS: learns contracting subspaces so that local behaviors track demonstrations, then uses contraction theory for stability [7].
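As a numerical illustration (ours, with a constant metric for simplicity): a system $\dot x = f(x)$ contracts at rate $\lambda$ under a metric $M \succ 0$ if $M J(x) + J(x)^\top M \preceq -2\lambda M$ at every state $x$, where $J$ is the Jacobian of $f$. The sketch below tests this condition on sampled states.

```python
import numpy as np

# Contraction check with a constant metric M (illustrative sketch).
# Condition: M J(x) + J(x)^T M + 2*lam*M negative semidefinite at every state.

def contracts(jacobian, M, samples, lam=0.1):
    for x in samples:
        J = jacobian(x)
        S = M @ J + J.T @ M + 2 * lam * M
        if np.max(np.linalg.eigvalsh(S)) > 0:   # largest eigenvalue must be <= 0
            return False
    return True

M = np.eye(2)
jacobian = lambda x: np.array([[-1.0, -1.0], [1.0, -1.0]])  # constant J, eigs -1 ± i
rng = np.random.default_rng(1)
print(contracts(jacobian, M, rng.normal(size=(200, 2))))    # True
```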

Diffeomorphic mapping

Diffeomorphisms, a key concept in differential geometry and topology, are smooth, bijective mappings between differentiable manifolds that preserve differentiability. When you apply a diffeomorphism to the state space of a dynamical system, the resulting transformed system inherits the exact same stability properties as the original. Their power in stability analysis comes from the fact that by reparameterizing the system’s coordinates or state variables, one can often recast a complicated dynamical system into a simpler, hand‐specified stable system (HSDS) whose stability is already established. In this way, picking the right diffeomorphic transformation can greatly simplify the task of proving stability.

  • τ-SEDS: augments SEDS with a diffeomorphic pre-mapping to relax Lyapunov constraints, boosting accuracy while retaining stability [10]. This framework overcomes the stability–accuracy dilemma by integrating the Lyapunov candidate into a diffeomorphic transformation, yielding provably globally stable DS that faithfully reproduce demonstrations.

Diffeomorphic Mapping for DS

Mapping a simple, hand-designed, provably stable DS through a smooth, bijective transformation (a diffeomorphism) allows one to inherit the latent system’s stability while recovering the accuracy needed for complex motions.

Why Diffeomorphic Mapping — Stability–Accuracy Dilemma

Robust DS must satisfy two often-conflicting goals:

  1. Stability: provable global convergence to a target under any perturbation.
  2. Accuracy: faithful reproduction of the demonstrated trajectory.

Khansari-Zadeh et al. first highlighted the stability–accuracy trade-off in SEDS, noting that although their Gaussian-mixture stability constraints guarantee global convergence, “these global stability conditions might be too stringent to accurately model some complex motions” [2]. The figure below illustrates this: the left panel shows C-shaped demonstrations from the LASA dataset overlaid on equipotential contours of the quadratic Lyapunov function, while the right panel superimposes the DS flow (blue arrows), original trajectories (black), and reproductions (red), revealing stable yet imprecise tracking.

Dynamical system example
Figure: The conflict between demonstration data and a DS constrained by a quadratic Lyapunov function. In the left panel, C-shaped trajectories from the LASA dataset are superimposed on the contour lines of the quadratic Lyapunov candidate, revealing their mismatch. The right panel shows the DS flow and its reproductions, which, although guaranteed stable, diverge noticeably from the original demonstrations.
Shiferaw, T. (2025) Advanced robotic manipulation with impedance control. MathWorks. Available at: https://ch.mathworks.com/company/technical-articles/enhancing-robot-precision-and-safety-with-impedance-control.html

Compared to Lyapunov‐function–based and contraction‐theory–based methods, the diffeomorphic‐mapping–based DS method can handle demonstration data on Riemannian manifolds and, by leveraging the properties of the mapping, guarantee that even complex motion models remain globally stable.

Theory of Diffeomorphic Transformations

A diffeomorphism $\psi\colon \mathcal{Y}\to \mathcal{X}$ is a smooth, bijective map with a smooth inverse, thereby providing a coordinate transformation between two differentiable manifolds $\mathcal{Y}$ and $\mathcal{X}$. According to Lee [11], any such diffeomorphism can be realized as a flow generated by an infinitesimal generator $\mathbf{V}$, often represented as a vector field on a smooth manifold. Specifically, let $\mathbf{V}\colon \mathbb{R}^d \to \mathbb{R}^d$ be a time-independent vector field and the flow $\gamma\colon \mathbb{R}\times\mathbb{R}^d\to\mathbb{R}^d$ be defined by

$$ \gamma(t,y) = y + \int_{0}^{t} \mathbf{V}\bigl(\gamma(u,y)\bigr)du = x. $$

This flow $\gamma(t,y)$ provides the position $x$ at time $t$ of a trajectory starting at $y$ when $t=0$. For each fixed time $t$, this flow defines a diffeomorphism $\psi_t\colon \mathcal{Y}\to\mathcal{X}$ by $\psi_t(y):=\gamma(t,y)$ (for a fixed $t$ we write simply $\psi$). Therefore, the flow defines an invertible mapping, whose inverse can be computed by reversing the direction of time:

$$ \gamma(-t,x) = x + \int_{0}^{-t} \mathbf{V}\bigl(\gamma(u,x)\bigr)\,du = y. $$

Note that this flow-based diffeomorphism $\psi(y):=\gamma(t,y)$ maps the initial point $y\in\mathcal{Y}$ to the point $x=\gamma(t,y)\in\mathcal{X}$. Furthermore, given a vector field $f\colon \mathcal{X}\to T\mathcal{X}$, where $f(x)$ assigns a tangent vector in $T_x\mathcal{X}$ to each point $x\in\mathcal{X}$, we can use $\psi$ to pull back $f$ to a vector field on $\mathcal{Y}$. Specifically, let $J_{\psi}$ be the Jacobian of $\psi$, then the pullback of $f$ via $\psi$ is

$$ \dot y = J_{\psi}^{-1}f\bigl(\psi(y)\bigr), $$

thereby transforming tangent vectors on $\mathcal{X}$ to corresponding tangent vectors on $\mathcal{Y}$.

If the system on the manifold $\mathcal{Y}$,

$$ \dot y = J_{\psi}^{-1}f\bigl(\psi(y)\bigr), $$

is globally asymptotically stable at an equilibrium $y^*\in\mathcal{Y}$, then the mapped system on $\mathcal{X}$,

$$ x = \psi(y), \qquad \dot x = J_{\psi}\bigl(\psi^{-1}(x)\bigr)\,\dot y = f(x), $$

is also globally asymptotically stable at the corresponding equilibrium $x^* = \psi(y^*)\in\mathcal{X}$.
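The sketch below (our illustration) makes this construction concrete in $\mathbb{R}^2$: a hand-made shear diffeomorphism $\psi$ is applied to the globally stable latent system $\dot y = -y$, and the pushed-forward system inherits convergence to $x^* = \psi(0) = 0$.

```python
import numpy as np

# Pushing a stable latent DS through a simple diffeomorphism (illustrative).
# Latent dynamics (globally stable): y_dot = -y.
# Diffeomorphism psi(y) = (y1, y2 + a*tanh(y1)): a shear, bijective for any a,
# with unit-determinant Jacobian, so it is a valid diffeomorphism of R^2.

a = 0.3

def psi(y):      return np.array([y[0], y[1] + a * np.tanh(y[0])])
def psi_inv(x):  return np.array([x[0], x[1] - a * np.tanh(x[0])])
def jac_psi(y):  return np.array([[1.0, 0.0],
                                  [a * (1.0 - np.tanh(y[0]) ** 2), 1.0]])

def x_dot(x):
    y = psi_inv(x)            # pull the state back to the latent space
    return jac_psi(y) @ (-y)  # push the latent velocity forward via the Jacobian

x = np.array([2.0, -1.0])
for _ in range(2000):         # Euler rollout, dt = 0.01
    x += 0.01 * x_dot(x)
print(x)                      # ~[0, 0] = psi(y*), for any start point
```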

How to build a Diffeomorphic Mapping for DS

Step 1: Build the latent space from the data

In the latent space, we aim for dynamics that are simple and whose stability is easy to prove. Common choices include:

  1. Linear or Quadratic DS
    • Linearized demonstrations: e.g., τ-SEDS, Laplacian-based dimensionality reduction that projects high-dimensional trajectories into a low-dimensional linear space.
    • Stochastic linear dynamics: including FDM, E-Flow, and I-Flow, which approximate demonstrations via linear differential equations with Gaussian models or random terms.
  2. Stable Neural ODEs
    • Modeling latent-space dynamics with Neural ODEs constrained for global asymptotic stability, combining expressiveness with convergence guarantees.
  3. Nonlinear DS and Limit Cycles
    • For cyclic motions (limit cycles), introduce phase-based scaling maps; for surfaces or other manifolds, embed them into the latent space via landmark matching or conformal maps.
  4. When designing the latent space, also consider the latent space’s dimension and order:
    • Non-Euclidean demonstrations (e.g., finger joints, rotations): express them in the latent space using Riemannian manifolds or Lie group structures.
    • Environmental changes and obstacle avoidance: incorporate infinitesimal generators of flows, space curvature, or rotational avoidance terms in the latent dynamics.
    • Second-order or dissipative systems: simulate energy dissipation and inertial effects via phase-based scaling or higher-dimensional Euclidean representations.

Step 2: Train the mapping between the original space and the latent space

After constructing the latent-space DS, the key is learning an invertible mapping that preserves stability while accurately reproducing demonstration trajectories. Main methods include:

  1. Classical Optimal Methods
    • Large Deformation Diffeomorphic Metric Mapping (LDDMM)
    • Optimal Transport–based mapping
    • Locally weighted translations with geometric constraints
  2. Geometry/Physics-Constrained Methods
    • Riemannian Gaussian Mixture Models for smooth manifold transformations
    • Hamiltonian-based diffeomorphic flows
    • Mappings defined on Lie groups
  3. Neural Network Approaches
    • Normalizing Flows (invertible neural networks): I-Flow, E-Flow, Jacobian-Constrained Networks, Non-Volume-Preserving flows
    • Diffeomorphic Neural Networks: using Neural ODE structures to ensure invertibility and diffeomorphic properties
    • Invertible Residual Networks: approximating invertible mappings with residual structures

Key Challenges:

Although diffeomorphic mappings are theoretically attractive, practical applications must address:

  1. Model Accuracy vs. the Curse of Dimensionality
    • High accuracy often requires a higher-dimensional latent space, leading to increased training and inference costs.
  2. Approximation Errors
    • Approximating diffeomorphic mappings on Riemannian or non-Euclidean spaces can introduce errors that affect strict stability guarantees.
  3. Practical Deployment
    • How to deploy on real robotic platforms with sufficient speed while handling sensor noise and real-time control requirements.

State-of-the-Art Approaches to Training the Mapping

The current methods for computing diffeomorphisms are mainly flow-based approaches, which construct a sequence of incremental transformations that iteratively deform the space, together with a cost function that keeps the overall deformation small.

Fast diffeomorphic matching (FDM)

FDM proposes a new diffeomorphic matching algorithm and uses it to learn nonlinear dynamical systems with the guarantee that the learned systems are globally asymptotically stable [12].

Iterative Locally Weighted Matching

A novel approach to diffeomorphic matching is based on diffeomorphic locally weighted translations. This method applies smooth, localized updates iteratively to approximate the diffeomorphism efficiently.

  • Parameters: Fix the number of iterations $K$, with $0 < \mu < 1$ (safety margin) and $0 < \beta \leq 1$ (learning rate). Typically, $K = 150$, $\mu \approx 0.9$, and $\beta \approx 0.5$.
  • Initialization: Set $Z := X$.

At each iteration $j$:

  1. Select the point $p_j$ in $Z$ furthest from its target $q$ in $Y$.
  2. Define the translation $\psi_{\rho_j, p_j, v_j}$ with direction $v_j = \beta (q - p_j)$ and Gaussian RBF kernel, where $\rho_j \in [0, \mu \rho_{\max}(v_j)]$ is optimized to minimize the distance between $\psi_{\rho_j, p_j, v_j}(Z)$ and $Y$.
  3. Update $Z := \psi_{\rho_j, p_j, v_j}(Z)$.

The final diffeomorphism is the composition of all local updates:

$$ \Phi = \psi_{\rho_K, p_K, v_K} \circ \cdots \circ \psi_{\rho_2, p_2, v_2} \circ \psi_{\rho_1, p_1, v_1}. $$


Pseudo-code

Algorithm — Fast Diffeomorphic Matching (FDM)

Input: $X=(x_i)_{i=0,\dots,N}$ and $Y=(y_i)_{i=0,\dots,N}$
Parameters: $K \in \mathbb{N}_{>0}, 0 < \mu < 1, 0 < \beta \leq 1$

Initialize: $Z = (z_i)_{i=0,\dots,N}$
Set $Z := X$

for $j = 1$ to $K$ do
$\qquad m := \arg\max_{i \in \{0,\dots,N\}} \lVert z_i - y_i \rVert$
$\qquad p_j := z_m$
$\qquad q := y_m$
$\qquad v_j := \beta (q - p_j)$
$\qquad \rho_j := \arg\min_{\rho \in [0, \mu \rho_{\max}(v_j)]} \mathrm{dist}\big(\psi_{\rho, p_j, v_j}(Z), Y\big)$
$\qquad Z := \psi_{\rho_j, p_j, v_j}(Z)$
end for

return $\{\rho_j\}_{j=1,\dots,K}, \{p_j\}_{j=1,\dots,K}, \{v_j\}_{j=1,\dots,K}$

This iterative matching scheme is both efficient and robust, yielding a smooth diffeomorphism by composing a sequence of locally weighted translations.
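A compact sketch of this scheme follows (ours; for simplicity the line search over $\rho_j$ uses a fixed grid, standing in for the interval $[0, \mu\,\rho_{\max}(v_j)]$ that guarantees each local translation stays invertible):

```python
import numpy as np

# Locally weighted translation: psi_{rho,p,v}(z) = z + k_rho(z) * v,
# with a Gaussian RBF weight centered at p.
def local_translation(Z, p, v, rho):
    w = np.exp(-np.sum((Z - p) ** 2, axis=1) / (2 * rho ** 2))
    return Z + w[:, None] * v

def fdm_match(X, Y, K=150, beta=0.5, rho_grid=np.linspace(0.05, 2.0, 40)):
    """Greedy matching of point set X onto Y by composing local translations."""
    Z = X.copy()
    steps = []
    for _ in range(K):
        m = np.argmax(np.linalg.norm(Z - Y, axis=1))    # worst-matched point
        p, q = Z[m].copy(), Y[m]
        v = beta * (q - p)
        rho = min(rho_grid,                             # stand-in for the rho search
                  key=lambda r: np.linalg.norm(local_translation(Z, p, v, r) - Y))
        Z = local_translation(Z, p, v, rho)
        steps.append((rho, p, v))                       # parameters of psi_j
    return steps, Z
```

Applying the stored $(\rho_j, p_j, v_j)$ in order to any new point evaluates the composed diffeomorphism $\Phi$.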

With our tutorial code, you can inspect the mapping results and the DS constructed using this method.

compare results
Figure: DS mapping results for the FDM method: the left figure shows the velocity vector field, and the right figure shows the potential field.
compare results
Figure: Grid representations of the latent space and original space, showing that the linear trajectory becomes the desired shape after mapping.

Euclideanizing Flows (E-flow)

Inspired by recent work in density estimation, E-Flow proposes to represent the diffeomorphism as a composition of simple parameterized diffeomorphisms [13].

By definition, a diffeomorphism must be bijective and continuously differentiable. To achieve this, we model it as a composition of $K$ mappings:

$$ \psi = \psi_1 \circ \psi_2 \circ \cdots \circ \psi_K, \quad \psi_k : \mathbb{R}^n \to \mathbb{R}^n. $$

Each mapping $\psi_k$ is implemented with a coupling layer that splits the input $z_{k-1}$ into two halves and applies scaling and translation:

$$ \psi_k(z_{k-1}) = \begin{bmatrix} z_{k-1}^a \\ z_{k-1}^b \odot \exp\big(s_k(z_{k-1}^a)\big) + t_k\big(z_{k-1}^a\big) \end{bmatrix} $$

where $s_k$ and $t_k$ are scaling and translation functions. This guarantees bijectivity and differentiability, so the composition $\psi$ is a valid diffeomorphism.


Learning Objective from Demonstrations

Suppose we have $N$ human demonstrations, each consisting of $T_i$ pairs $(x_{i,t}, \dot{x}_{i,t})$. We aim to learn a dynamical system

$$ \dot{x} = f_{\psi}(x) $$

that reproduces the demonstrations while ensuring stability. With a coordinate transform $y = \psi(x)$, the system becomes a gradient flow

$$ \dot{y} = -\nabla_y \Phi(y), \quad \Phi(y) = \| y - y^* \|, \quad y^* = \psi(x^*). $$

The learning problem reduces to minimizing the trajectory error:

$$ \hat{\theta} = \arg\min_{\theta} \frac{1}{\sum_{i=1}^{N} T_i} \sum_{i=1}^{N}\sum_{t=1}^{T_i} \bigl\lVert \dot{x}_{i,t} - f_{\psi_{\theta}}(x_{i,t}) \bigr\rVert^{2} $$


Kernelized Coupling Layers

To enforce smoothness, we parameterize $s_k$ and $t_k$ with Gaussian kernels:

$$ k(z, z') = \exp\Big(-\frac{\lVert z - z' \rVert^2}{2l^2}\Big). $$

Using random Fourier features:

$$ \varphi(z) = \sqrt{\tfrac{2}{m}} [\cos(a_1^T z + b_1), \ldots, \cos(a_m^T z + b_m)]^T, $$

with $a_j \sim \mathcal{N}(0, l^{-2} I)$ and $b_j \sim U(0,2\pi)$. Then,

$$ s_k(z) = \varphi(z)^T W_{s_k}, \quad t_k(z) = \varphi(z)^T W_{t_k}, $$

where $W_{s_k}, W_{t_k}$ are learnable parameters.
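The sketch below (ours; array shapes and the initialization are illustrative) implements one such coupling layer with random-Fourier-feature scale and translation functions, and checks that it is exactly invertible:

```python
import numpy as np

# One RealNVP-style coupling layer with random-Fourier-feature s_k, t_k.
# The first half of the input passes through unchanged; the second half is
# scaled and translated by functions of the first half, so inversion is exact.

class CouplingLayer:
    def __init__(self, dim, m=64, length_scale=1.0, rng=np.random.default_rng(0)):
        self.da = dim // 2                                    # pass-through half
        self.m = m
        self.A = rng.normal(0.0, 1.0 / length_scale, (m, self.da))
        self.b = rng.uniform(0.0, 2.0 * np.pi, m)
        self.Ws = 0.01 * rng.normal(size=(m, dim - self.da))  # learnable in practice
        self.Wt = 0.01 * rng.normal(size=(m, dim - self.da))

    def features(self, za):                                   # random Fourier features
        return np.sqrt(2.0 / self.m) * np.cos(self.A @ za + self.b)

    def forward(self, z):
        za, zb = z[:self.da], z[self.da:]
        phi = self.features(za)
        return np.concatenate([za, zb * np.exp(phi @ self.Ws) + phi @ self.Wt])

    def inverse(self, x):
        xa, xb = x[:self.da], x[self.da:]
        phi = self.features(xa)                               # xa equals za
        return np.concatenate([xa, (xb - phi @ self.Wt) * np.exp(-phi @ self.Ws)])

layer = CouplingLayer(dim=4)
z = np.array([0.3, -1.2, 0.5, 2.0])
print(np.allclose(layer.inverse(layer.forward(z)), z))        # True: exact inverse
```

In the full method, several such layers are stacked (typically alternating which half passes through unchanged), and the weights $W_{s_k}, W_{t_k}$ are trained on the objective above.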

With our tutorial code, you can inspect the mapping results and the DS constructed using this method.

compare results
Figure: DS mapping results for the E-Flow method: the left figure shows the velocity vector field, and the right figure shows the potential field.
compare results
Figure: Grid representations of the latent space and original space, showing that the linear trajectory becomes the desired shape after mapping.

Imitation Flow

ImitationFlow extends the framework of stochastic dynamical systems by integrating normalizing flows into the emission function, providing both stability guarantees and expressive modeling power.
The method is designed to learn stable stochastic dynamical systems from demonstration data while preserving asymptotic stability of the dynamics.


Model Formulation

In the latent space $\mathcal{Z}$, the system evolves according to a stochastic differential equation (SDE):

$$ dz(t) = f_{\phi}(z(t))dt + g_{\phi}(z(t))dB(t), $$

where $f_{\phi}$ and $g_{\phi}$ are the drift and diffusion terms parameterized by $\phi$, and $B(t)$ is a $d$-dimensional Brownian motion.
The observation space $\mathcal{Y}$ is linked to the latent space via a diffeomorphic transformation $h_{\theta}$:

$$ y = h_{\theta}(z), $$

where $h_{\theta}$ is bijective, smooth, and parameterized by $\theta$.
This guarantees that the learned mapping preserves the stability properties of the latent dynamics in the observation space.


Equivalent Dynamics in the Observation Space

Given the Jacobian of the transformation $J_{\theta}(y) = \frac{dz}{dy}$, the stochastic dynamics of $y(t)$ can be rewritten as:

$$ dy(t) = J_{\theta}(y) f_{\phi}\left(h_{\theta}^{-1}(y)\right)dt + J_{\theta}(y) g_{\phi}\left(h_{\theta}^{-1}(y)\right) dB(t). $$

This formulation ensures that the transformed dynamics remain stable while enabling expressive modeling of complex motion patterns.


Learning Algorithm

The goal is to maximize the likelihood of the observed trajectories under the ImitationFlow model:

$$ \psi^* = \arg\max_{\psi=\{\theta,\phi\}} \mathcal{L}_{\psi}(\mathcal{T}), $$

where $\mathcal{T}$ is the set of expert demonstrations and $\mathcal{L}_{\psi}$ is the trajectory likelihood.
By leveraging the change-of-variable rule of normalizing flows, the probability of trajectories in $\mathcal{Y}$ is rewritten in terms of the latent dynamics in $\mathcal{Z}$:

$$ p(y) = p(z) \left|\det \frac{\partial z}{\partial y}\right|. $$

Thus, the learning process consists of two coupled steps:

  1. Estimating the stable latent dynamics parameters $\phi$;
  2. Optimizing the flow transformation $h_{\theta}$ to faithfully reproduce demonstrations in the observation space.
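As a minimal illustration of the change-of-variable rule (ours, in one dimension with the fixed map $y = \tanh(z)$ rather than a learned flow), the observed density is the latent density reweighted by the Jacobian of the inverse map:

```python
import numpy as np

# Change of variable for y = h(z) = tanh(z):
#   z = atanh(y),  dz/dy = 1 / (1 - y^2),
#   p_Y(y) = p_Z(atanh(y)) * |dz/dy|.

def latent_density(z):                     # standard normal latent density
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

def observed_density(y):
    z = np.arctanh(y)
    dz_dy = 1.0 / (1.0 - y ** 2)           # Jacobian of the inverse map
    return latent_density(z) * dz_dy

# The transformed density still integrates to ~1 over (-1, 1):
ys = np.linspace(-0.999, 0.999, 20001)
print(np.sum(observed_density(ys)) * (ys[1] - ys[0]))   # ~1.0
```

ImitationFlow applies the same rule trajectory-wise, with $h_\theta$ a learned multivariate flow.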

Pseudo-code

Algorithm 1 — ImitationFlow Learning

Input: $\mathcal{T}$ trajectories
Parameters: $\phi$ dynamics, $\theta$ NormalizingFlow

while not converged do
$\qquad \tau_y \leftarrow {\mathcal{T}}$
$\qquad \Delta T \leftarrow \text{Get a sampling time}$
$\qquad \tau_z, \lvert J_{\tau_z}^{-1}\rvert \leftarrow h_{\theta}^{-1}(\tau_y)$
$\qquad z_{(0:T-\Delta T)}, z_{(\Delta T:T)}, z_n \leftarrow \mathrm{SplitTime}(\tau_z, \Delta T)$
$\qquad p(\cdot \mid z_{i+\Delta T};\phi),\ p_n(\cdot;\phi) \leftarrow \mathrm{GetDensFunc}\big(z_{(\Delta T:T)}, z_n\big)$
$\qquad \mathcal{L} = p_n(z_n;\phi)\,\lvert J_{n}^{-1}\rvert \prod_i p\big(z_i \mid z_{i+\Delta T};\phi\big)\,\lvert J_{i}^{-1}\rvert$
$\qquad \Delta\theta, \Delta\phi \propto \nabla_{\theta}\mathcal{L},\ \nabla_{\phi}\mathcal{L}$
end while

With our tutorial code, you can inspect the mapping results and the DS constructed using this method.

compare results
Figure: DS mapping results for the Imitation Flow method: the left figure shows the velocity vector field, and the right figure shows the potential field.
compare results
Figure: Grid representations of the latent space and original space, showing that the linear trajectory becomes the desired shape after mapping.

Programming exercise for classical methods

Tutorial code repository

We provide a code repository where you can test and compare different diffeomorphic mapping methods for DS training.

GitHub – RLoad/Tutorial_DS_mapping

Below we list the methods used in our code structure, and we thank the authors for their open-source code repositories.

Methods List

  • Fast Diffeomorphic Matching (FDM): Perrin & Schlehuber-Caissier (2016) introduce FDM to align a reference attractor to the demonstration manifold by solving large-deformation diffeomorphism matching with stability certificates [12].

  • Euclideanizing Flows (E-FLOW): Rana et al. (2020) view diffeomorphism learning as a normalizing flow: compose simple parameterized maps so that $x=\phi(z)$, with $z$ following a linear stable DS. Stability follows directly from the base flow [13].

  • Imitation Flows (I-FLOW) (ongoing): Urain et al. (2020) extend E-FLOW to stochastic stabilization, pushing a simple contracting SDE through a learnable diffeomorphism via normalizing flows, ensuring both stability and expressivity [14].

  • Riemannian Stable DS (RSDS) (ongoing): Saveriano, Abu-Dakka & Kyrki (2023) learn diffeomorphic maps on manifolds (e.g. orientation on $\mathrm{SO}(3)$) via neural manifold ODEs, enforcing Lyapunov stability on Riemannian manifolds [15].

  • More is coming …

Code Structure Overview

The purpose of this code framework is fourfold:

  • Define the core scenario: DS-based skill learning and generalization via geometric configuration
  • Provide a concise yet representative example to demonstrate key concepts
  • Offer modular code and rich visualizations to facilitate learner understanding
  • Enable method comparison and metric selection for objective evaluation

The repository is organized into modular components that follow the stages outlined in the DS diffeomorphic mapping tutorial:

  1. Toy Data Generation
    • Generates synthetic trajectories based on LASA handwriting data.
    • Supports both 2D S-shaped curves and 3D curved surfaces for refinement.
    • Visualizes raw and target trajectories.
    toy data
    Figure: The toy data generated by our code structure. Left: 2D LASA handwriting data; Right: 3D toy data.
  2. Mapping Method Choice
    • τ-SEDS: Stable Estimator of Dynamical Systems using Gaussian mixture models.
    • Fast Diffeomorphic Matching: Efficient algorithms for time-variant diffeomorphic transformations.
    • Euclideanizing Flows: Flow-based models that map curved dynamics into Euclidean latent spaces.
    • Imitation Flows: Neural network–based residual flows for trajectory imitation.
    • More …
  3. Training Pipeline
    • Constructs latent space structure and prepares paired datasets. We can construct the latent space using either a linear or a quadratic form.
    latent space
    Figure: The latent-space vector field and its potential energy. Left: Vector field; Right: Potential energy
    • Obtain the training dataset for both latent and original spaces.
    linearize
    Figure: The original dataset and its linearized counterpart.
    • Selects model parameters and network architecture via command-line interface.
    NN structure
    Figure: Neural network architectures commonly used for training DS mappings.
    • Design the neural network architecture (optional).
    • Training scripts (train_*.py) log progress, plot loss curves, and save checkpoints.
  4. Evaluation & Visualization
    • Computes metrics: Root Mean Squared Error (RMSE), Dynamic Time Warping Distance (DTWD), and Fréchet Distance (FD); a minimal sketch of the first two follows this list.
    • Vector field simulation to test learned DS trajectories against ground truth.
    • Plotting utilities for 2D, 3D, and vector field visualizations.

    An interactive interface for testing the code directly will be added here.

  5. Utilities
    • Common functions for data loading, logging, and plotting.
    • Configuration loader and argument parsers.
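For reference, here is the promised minimal sketch of two of the evaluation metrics listed above (our own implementations, not the repository's utilities):

```python
import numpy as np

# RMSE between two aligned trajectories of equal length, and a simple
# quadratic-time dynamic time warping distance (DTWD) for unaligned ones.

def rmse(traj_a, traj_b):
    return np.sqrt(np.mean(np.sum((traj_a - traj_b) ** 2, axis=1)))

def dtwd(traj_a, traj_b):
    n, m = len(traj_a), len(traj_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

demo = np.stack([np.linspace(0, 1, 100), np.sin(np.linspace(0, np.pi, 100))], axis=1)
repro = demo + 0.01 * np.random.default_rng(0).normal(size=demo.shape)
print(rmse(demo, repro), dtwd(demo, repro))
```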

Getting Started
# Clone the repository
git clone https://github.com/RLoad/Tutorial_DS_mapping.git
cd Tutorial_DS_mapping

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

Want to implement a real project?

[TODO]

Credits

References

  1. Argall, B. D., Chernova, S., Veloso, M., & Browning, B. (2009). A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5), 469–483.
  2. Khansari-Zadeh, S. M., & Billard, A. (2011). Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Transactions on Robotics, 27(5), 943–957.
  3. Pastor, P., Hoffmann, H., Asfour, T., & Schaal, S. (2009). Learning and generalization of motor skills by learning from demonstration. In 2009 IEEE International Conference on Robotics and Automation (ICRA) (pp. 763–768).
  4. Khansari-Zadeh, S. M., & Billard, A. (2014). Learning control Lyapunov functions to ensure stability of dynamical system–based robot reaching motions. Robotics and Autonomous Systems, 62(6), 752–765.
  5. Khansari-Zadeh, S. M., & Billard, A. (2012). A dynamical system approach to real-time obstacle avoidance. Autonomous Robots, 32(4), 433–454.
  6. Kronander, K., Khansari-Zadeh, S. M., & Billard, A. (2015). Incremental motion learning with locally modulated dynamical systems. Robotics and Autonomous Systems, 70, 52–62.
  7. Kolter, J. Z., & Manek, G. (2019). Learning stable deep dynamics models. Advances in Neural Information Processing Systems, 32, 11128–11136.
  8. Kang, Q., Song, Y., Ding, Q., & Tay, W. P. (2021). Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks. In Advances in Neural Information Processing Systems, 34, 14925–14937.
  9. Khansari-Zadeh, S. M., & Billard, A. (2014). The LASA handwriting dataset for evaluation of trajectory generation algorithms. Technical Report, LASA Lab, EPFL.
  10. Neumann, K., & Steil, J. J. (2015). Learning robot motions with stable dynamical systems under diffeomorphic transformations. Robotics and Autonomous Systems, 70, 1–15.
  11. Lee, J. M. (2013). Introduction to Smooth Manifolds (2nd ed., Graduate Texts in Mathematics, Vol. 218). Springer.
  12. Perrin, N., & Schlehuber‐Caissier, P. (2016). Fast diffeomorphic matching to learn globally asymptotically stable nonlinear dynamical systems. Systems & Control Letters, 96, 51–59.
  13. Rana, M. A., Li, A., Fox, D., Boots, B., Ramos, F., & Ratliff, N. (2020). Euclideanizing flows: Diffeomorphic reduction for learning stable dynamical systems. In Learning for Dynamics and Control (pp. 630–639). PMLR.
  14. Urain, J., Ginesi, M., Tateo, D., & Peters, J. (2020). ImitationFlow: Learning deep stable stochastic dynamical systems by normalizing flows. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5231–5237).
  15. Saveriano, M., Abu-Dakka, F. J., & Kyrki, V. (2023). Learning stable robotic skills on Riemannian manifolds. Robotics and Autonomous Systems, 169, 104510.
  16. Gupta, S., Nayak, A., & Billard, A. (2022). Learning high dimensional demonstrations using Laplacian eigenmaps. IEEE Robotics and Automation Letters, 7(4), 10219–10226.
  17. Ravanbakhsh, H., & Sankaranarayanan, S. (2019). Learning control-Lyapunov functions from counterexamples and demonstrations. Autonomous Robots, 43, 275–307.
  18. Jin, Z., Si, W., Liu, A., Zhang, W. A., Yu, L., & Yang, C. (2023). Learning a flexible neural energy function with a unique minimum for globally stable and accurate demonstration learning. IEEE Transactions on Robotics, 39(6), 4520–4538.
  19. Zhi, W., Lai, T., Ott, L., & Ramos, F. (2022). Diffeomorphic Transforms for Generalised Imitation Learning. In Learning for Dynamics and Control, 23, 508–519.
  20. Huber, L., Slotine, J. J., & Billard, A. (2023). Avoidance of concave obstacles through rotation of nonlinear dynamics. IEEE Transactions on Robotics, 40, 1983–2002.
  21. Boumal, N. (2023). An Introduction to Optimization on Smooth Manifolds (2nd ed.). Cambridge University Press. ISBN 978-1108426292.

Want to learn more? → Free Online Courses [TODO]

Books
