3  Bifurcation Theory Approach

As we have mentioned several times before, the fundamental goal of this thesis is to study the computational aspects of the continuation and detection of stable and unstable manifolds of periodic orbits (and the challenges that come with them). As such, we shall properly discuss what a continuation is and how we compute it. We begin with a theoretical treatment of bifurcation theory, focused solely on cycle continuation and stability identification, followed by original results concerning coupled models of neurons. The theoretical introduction is mainly based on [1], [2], and [3], but many more related articles are cited throughout the discussion.

We should also disclose that while this chapter is named “Bifurcation Theory Approach”, we will spend relatively little time discussing actual bifurcations. Our main focus lies in the continuation of (un)stable cycles.

Although we primarily rely on existing implementations (MatCont, see [4] and [5], for continuation in ODEs and DDE-BIFTOOL, see [6], in DDEs), in this chapter we also provide a rather thorough theoretical and numerical overview of the underlying methods and algorithms. Moreover, honoring the saying “do not reinvent the wheel”, we can base our conclusions on peer-reviewed and community-tested tools, further strengthening our claims. Lastly, a novel technique, along with its implementation, for the continuation of the shift between two oscillators is provided (through a referenced co-authored article).

3.1 Theory of Localizations

Although the primary subjects of our study are periodic orbits, we will first delve into the localization of equilibria. Indeed, one might recall we discussed the relation between equilibria (or fixed points) of discrete dynamical systems and periodic orbits of continuous-time dynamical systems in Section 1.1.4. This fact alone suggests it is beneficial to first study the simpler case of equilibrium localization, building tools to later tackle the periodic case.

3.1.1 Dynamical Systems

Equilibrium Localization

To start relatively simply, consider a continuous-time dynamical system induced by an autonomous ODE of the form (1.4), i.e., \[ \dvi x = \vi f(\vi x), \; \vi x \in \R^n. \] Its equilibrium \(\vi x^*\) is then given by \[ \vi f(\vi x^*) = 0. \tag{3.1}\] Note that similarly by setting \(\vi f(\vi x) = \vi g(\vi x) - \vi x\) we can get the same equilibrium condition for a discrete-time dynamical system \(\vi x \mapsto \vi g(\vi x)\).

It is easy to see that if the equilibrium \(\vi x^*\) is stable, one can integrate the system (1.4) forward in time starting from some point \(\vi x\) in the basin of attraction of \(\vi x^*\), since \[ \lim_{t \to \infty} \norm{\evolOp(t, \vi x) - \vi x^*} = 0 \] by Definition 1.11. Similarly, if \(\vi x^*\) is totally unstable (i.e., repelling in all directions), we can simply reverse time (as we are in the ODE case) and repeat the procedure1.

Nonetheless, in general, this approach will not work, as equilibria are rarely stable. The standard procedure is to start from a point in a small neighborhood around the equilibrium (determined either analytically or via numerical integration) and construct a sequence of points \(\Seqnc {\vi x^{i}} {i = 0} \infty\), which converges to \(\vi x^*\) under general conditions. For this purpose, we can use Newton’s method, see Section 1.3.5, or one of its modifications.

Suppose we have found \(\vi x^*\) with the desired accuracy. Our next task is then to determine its stability, i.e., how the phase portrait behaves in its vicinity. By Theorem 1.3, Theorem 1.2, and the following discussion, we know that the eigenvalues of the Jacobian matrix \(\Jacobi^*\) determine the stability of \(\vi x^*\). In other words, we need to compute the roots of the characteristic polynomial \[ p(\lmbd) = \det (\Jacobi^* - \lmbd \ident). \] Note that there are other methods of computing stability, which in particular do not require eigenvalues, but only the computation of certain determinants derived from \(\Jacobi^*\). We refer the interested reader to [1], page 470.
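
To make the procedure concrete, below is a minimal Julia sketch of both steps — Newton localization and the eigenvalue-based stability check — for a hypothetical planar system (the van der Pol oscillator). The functions `f`, `Jf`, and the initial guess are illustrative assumptions, not part of the referenced methods.

```julia
using LinearAlgebra

# Hypothetical test system: the van der Pol oscillator with μ = 1.
f(x) = [x[2], (1 - x[1]^2) * x[2] - x[1]]
Jf(x) = [0.0 1.0; -2x[1]*x[2] - 1.0 1.0 - x[1]^2]   # hand-coded Jacobian

# Newton's method (cf. Section 1.3.5) applied to f(x) = 0.
function newton_equilibrium(x0; tol=1e-12, maxit=50)
    x = copy(x0)
    for _ in 1:maxit
        r = f(x)
        norm(r) < tol && break
        x -= Jf(x) \ r                # Newton correction step
    end
    return x
end

xstar = newton_equilibrium([0.3, -0.2])   # converges to the origin
eigvals(Jf(xstar))                        # Re λ = 1/2 > 0 here: the equilibrium is unstable
```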

Periodic Orbit Localization

As we have seen, the localization of an equilibrium from a sufficiently close initial guess is rather simple, both theoretically and computationally. Unfortunately, for limit cycles, the situation is more complicated. Let us note that [7] served as the primary source material for this section.

Assume again we have a dynamical system prescribed by (1.4), \[ \dvi x = \vi f(\vi x), \; \vi x \in \R^n, \] such that it has an isolated periodic orbit \(L_0\). Let \(\vi x^0(t + T_0) = \vi x^0(t)\) be the corresponding solution with minimal period \(T_0 > 0\). Further assume that the cycle \(L_0\) has corresponding multipliers, see Section 1.1.4 and Lemma 1.1, \[ \mu_1, \dots, \mu_n \in \C, \] which are by Theorem 1.7 the eigenvalues of the \(n\times n\) monodromy matrix \(\vi M(T_0)\), where \(\vi M(t)\) satisfies \[ \rcases{ \dvi M(t) &= \jacobi \vi f (\vi x^0(t)) \vi M(t), \\ \vi M(0) &= \ident_n. } \tag{3.2}\] Recall there is always a trivial multiplier \(\mu_1 = 1\). Also, if \(\absval{\mu_i} < 1\) for all nontrivial multipliers, i.e., for \(i = 2, \dots, n\), then the cycle is stable. In contrast, if there exists \(i \in \oneToN{n}\) such that \(\absval{\mu_i} > 1\), the cycle is unstable (and if all nontrivial multipliers have a modulus greater than one, the cycle is called totally unstable or repelling).
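
For illustration, the monodromy matrix can be obtained by integrating (3.2) along the orbit. The Julia sketch below does this for the van der Pol oscillator, assuming a point on the cycle and its period are already known; the quoted values, approximately \((2.0086, 0)\) and \(6.6633\), are assumptions serving only as an example.

```julia
using DifferentialEquations, LinearAlgebra

f(x) = [x[2], (1 - x[1]^2) * x[2] - x[1]]                 # van der Pol, μ = 1
Jf(x) = [0.0 1.0; -2x[1]*x[2] - 1.0 1.0 - x[1]^2]

# Integrate the orbit and the variational equation (3.2) jointly over one period.
function monodromy(x0, T0)
    n = length(x0)
    function rhs!(du, u, p, t)
        x = u[1:n]
        M = reshape(u[n+1:end], n, n)
        du[1:n] .= f(x)
        du[n+1:end] .= vec(Jf(x) * M)                     # Ṁ = Jf(x(t)) M
    end
    u0 = vcat(x0, vec(Matrix{Float64}(I, n, n)))          # M(0) = I
    sol = solve(ODEProblem(rhs!, u0, (0.0, T0)), Tsit5(); abstol=1e-10, reltol=1e-10)
    return reshape(sol.u[end][n+1:end], n, n)             # M(T0)
end

# Approximate point on the cycle and its period (assumed known from prior integration).
μs = eigvals(monodromy([2.0086, 0.0], 6.6633))            # one multiplier ≈ 1 (trivial)
```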

If the system features a stable limit cycle \(L_0\), then we can find it again by numerical integration from a point in its basin of attraction. In contrast to the case of equilibria, backward-time numerical integration will, in general, not help us find a repelling periodic orbit.

Suppose we have a system with a 2-dimensional state space that possesses a periodic orbit, see Figure 3.1. From the theory of differential equations, we know that the trajectories of an autonomous ODE (1.4) do not intersect. Thus a periodic orbit \(L_0\) constitutes a Jordan curve2. Then, by the Jordan curve theorem, see [8], such an orbit partitions the state space into the inside and the outside of the cycle (and the cycle itself)3. Thus, if \(L_0\) is attracting, trajectories starting sufficiently close to it are guaranteed to converge to \(L_0\) forward in time, whereas for a repelling \(L_0\), they converge backward in time.

Figure 3.1: Unstable periodic orbit \(L_0\) of a 2-dimensional dynamical system.

On the other hand, for three- or more-dimensional dynamical systems, a periodic orbit does not partition the state space (or any local neighborhood). Then, we cannot guarantee convergence to the repelling cycle by backward-time integration, no matter the initial vicinity. Needless to say, the localization approach based on numerical integration will surely fail if the cycle is neither stable nor repelling. In other words, we cannot even detect the cycle if it is not attracting in forward or backward time. Note that this very fact is the crucial support for the bifurcation theory-based approach. If our studied system has a stable cycle for some parameters, we can continue it with respect to a given parameter. Should the periodic orbit become unstable, this approach, as we shall see later, will still allow us to detect and continue the cycle manifold. However, no matter the modifications, this is simply not possible with a simulation-on-grid approach as in Chapter 2.

From now on, we will assume we know the location of the periodic orbit approximately. A correction method, most often Newton-Raphson, is then applied repeatedly until satisfactory convergence is achieved. This is a rather common scenario in practice, where we often know the location of the cycle for a given parameter value and wish to “continue” by varying the parameter and correcting the cycle afterward. This process is commonly referred to as continuation.

Usually, we take the period \(T\) of \(L_0\) as an unknown and formulate the boundary-value problem (BVP) on a fixed interval. In particular, assume \(T\) is a parameter and construct a “time-normalized” system \[ \deriv {\vi u} {s} = T \vi f(\vi u), \; s \in [0,1], \tag{3.3}\] which rescales (1.4) by the time-scaling factor \(T\), such that the new time is denoted \(s\). Now, if a solution \(\vi u(s)\) to (3.3) also satisfies the periodic boundary conditions \[ \vi u(0) = \vi u(1), \tag{3.4}\] then it corresponds to a \(T\)-periodic solution of (1.4). However, these two conditions do not prescribe the periodic orbit of (1.4) uniquely. Indeed, any time shift of a solution to the BVP (3.3), (3.4) also determines the same periodic orbit.

Therefore, an extra condition, called the phase condition, must be added. Such a condition is most often written in the form \[ \Phi[\vi u] = 0, \tag{3.5}\] where \(\Phi\) is a scalar functional defined on the space of periodic solutions. The exact choice of the phase condition is up to the user, but the most frequent one is the integral phase condition \[ \Phi[\vi u] = \int_0^1 \scal {\vi u(s)} {\dvi v(s)} \dd s, \tag{3.6}\] where \(\vi v(s) \in \contf{[0, 1], \R^n}{}\) is a reference period-one solution.

Lemma 3.1 The integral phase condition (3.6) is a necessary condition for a local minimum of the time-shift \(L_2\)-distance of period-1 smooth functions \(\vi u, \vi v : \R \to \R^n\) \[ \rho(\sigma) = \int_0^1 \norm{\vi u(s + \sigma) - \vi v(s)}_2^2 \dd s, \] such that the minimum is obtained at shift \(\sigma = 0\).

Proof. See [7], lemma 11.
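
For intuition, let us sketch the short computation behind the lemma (under the stated smoothness and periodicity assumptions). Differentiating under the integral sign and using the periodicity of \(\vi u\), \[ \frac 1 2 \rho'(0) = \int_0^1 \scal{\dvi u(s)}{\vi u(s) - \vi v(s)} \dd s = \underbrace{\frac 1 2 \brackets{\norm{\vi u(1)}_2^2 - \norm{\vi u(0)}_2^2}}_{= 0} - \int_0^1 \scal{\dvi u(s)}{\vi v(s)} \dd s, \] and integrating the remaining term by parts (the boundary term again vanishes by periodicity) gives \(\frac 1 2 \rho'(0) = \int_0^1 \scal{\vi u(s)}{\dvi v(s)} \dd s\). Hence the stationarity condition \(\rho'(0) = 0\) is precisely the integral phase condition (3.6).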

We can now collect all the conditions to obtain the periodic BVP \[ \rcases{ \dvi u(s) &= T \vi f(\vi u(s)),\; s \in [0,1] \\ \vi u(0) &= \vi u(1), \\ \int_0^1 \scal {\vi u(s)} {\dvi v(s)} \dd s &= 0. } \tag{3.7}\] If \((\vi u(\cdot), T_0) \in \contf {[0,1], \R^n} 1 \times \R\) satisfies (3.7), then \(\vi x(t) = \vi u\brackets{\frac t {T_0}}\) gives the \(T_0\)-periodic solution of (1.4) with \(\vi x(0) = \vi u(0)\).

Moreover, for the fundamental matrix solution \(\vi M(t)\) of (1.4), we define \(\vi \Gamma(s)\) such that \(\vi M(T) = \vi \Gamma(1)\) holds for the monodromy matrix of \(L_0\) and \[ \dvi \Gamma(s) - T \jacobi \vi f(\vi u(s)) \vi \Gamma(s) = \vi 0, \; \vi \Gamma(0) = \ident_n. \tag{3.8}\] The eigenvalues of \(\vi \Gamma(1)\) correspond exactly to the aforementioned multipliers of the cycle \(L_0\). We also define the adjoint monodromy matrix \(\vi \Psi(1)\) as the solution of \[ \dvi \Psi(s) - T \jacobi \vi f\Tr(\vi u(s)) \vi \Psi(s) = \vi 0, \; \vi \Psi(0) = \ident_n \] evaluated at 1. It follows that \[ \vi \Psi(s) = \brackets{\vi \Gamma(s)\Inv}\Tr. \tag{3.9}\] In general, any solution \(\vi \xi \in \contf {[0,1], \R^n} {1}\) of an inhomogeneous linear system \[ \dvi \xi - T \jacobi \vi f(\vi u(s)) \vi \xi = \vi b(s), \] where \(\vi b \in \contf {\R, \R^n} {0}\), can be written as (by (3.9))4 \[ \vi \xi(s) = \vi \Gamma(s) \brackets{\vi \xi(0) + \int_0^{s} \vi \Gamma\Inv(\sigma) \vi b(\sigma) \dd \sigma} = \vi \Gamma(s) \brackets{\vi \xi(0) + \int_0^s \vi \Psi\Tr(\sigma) \vi b(\sigma) \dd \sigma}. \tag{3.10}\]

Definition 3.1 A cycle \(L\) is called simple if the trivial multiplier \(\mu_1 = 1\) has algebraic multiplicity5 equal to 1.

Let \(\vi q_0, \vi p_0 \in \R^n\) be the right and left eigenvectors of the monodromy matrix corresponding to the trivial multiplier \(\mu_1\), \[ \begin{aligned} (\vi \Gamma(1) - \ident_n) \vi q_0 = (\vi \Psi(1) - \ident_n) \vi p_0 & = \vi 0, \\ (\vi \Gamma(1) - \ident_n)\Tr \vi p_0 = (\vi \Psi(1) - \ident_n)\Tr \vi q_0 & = \vi 0, \end{aligned} \] such that \(\norm{\vi p_0}_2 = \norm{\vi q_0}_2 = 1\). Also, \(\vi q_0 = c_0 \vi f(\vi u(0))\) for some \(c_0 \in \R \setminus \set{0}\), which follows from (3.8) using (3.3), (3.4): \[\begin{align*} \partialOp {s} \brackets{c_0 \vi f(\vi u(s))} - T \jacobi \vi f(\vi u(s))\, c_0 \vi f(\vi u(s)) &= c_0 \jacobi \vi f(\vi u(s)) \dvi u(s) - T \jacobi \vi f(\vi u(s))\, c_0 \vi f(\vi u(s)) \\ &= c_0 \jacobi \vi f(\vi u(s))\, T \vi f(\vi u(s)) - T \jacobi \vi f(\vi u(s))\, c_0 \vi f(\vi u(s)) \\ &= \vi 0, \end{align*}\] i.e., \(c_0 \vi f(\vi u(s))\) solves the variational equation (3.8) and, by the periodicity (3.4), returns to its initial value at \(s = 1\).

As we have mentioned, the workflow for localization of a periodic orbit is to start from an initial guess \((\vi u, T)\), which we correct by \((\vi w, S) \in \contf {[0,1], \R^n} {1} \times \R\), i.e., \[ (\vi u, T) \mapsto (\vi u + \vi w, T + S), \] where \((\vi w(\cdot), S)\) is the solution of the linearized inhomogeneous BVP (for reference, see (3.7)) \[ \rcases{ \dvi w(s) - T \jacobi \vi f(\vi u(s)) \vi w - S \vi f(\vi u(s)) &= - \dvi u(s) + T \vi f(\vi u(s)), \; s \in [0,1], \\ \vi w(0) - \vi w(1) &= - \vi u(1) + \vi u(0), \\ \int_0^1 \scal {\dvi v(\sigma)} {\vi w(\sigma)} \dd \sigma &= - \int_0^1 \scal {\dvi v(\sigma)} {\vi u(\sigma)} \dd \sigma. } \tag{3.11}\] The left-hand side of (3.11) can be expressed as a matrix operator of the form \[ \underbrace{\bmtr{ \oper{\jacobi} - T \jacobi \vi f(\vi u) & -\vi f(\vi u) \\ \evalOp{0} - \evalOp{1} & 0 \\ \testInt{\dvi v} & 0 }}_{\oper L_{\vi u, T}} \mtr{ \vi w \\ S }, \] where \(\oper{\jacobi}\) denotes the differentiation operator, \(\evalOp{a}\) is the evaluation operator at \(s = a\), i.e., \(\evalOp{a} \vi w = \vi w(a)\), and \[ \testInt{\dvi v} \vi w = \int_0^1 \scal{\dvi v(\sigma)} {\vi w(\sigma)} \dd \sigma. \]

By taking \(\vi v = \vi u\), where \(\vi u\) is a solution of (3.7), we get \(\dvi v = \dvi u = T \vi f(\vi u)\). Moreover, for the following theorem, one might choose \(\vi v\) sufficiently close to such \(\vi u\) (most importantly when \(\vi v\) is a reference period-1 function, for example, the solution in the previous step of continuation). Lastly, the replacement of \(T \vi f(\vi u)\) by \(\vi f(\vi u)\) does not change the fundamental properties of the operator \(\oper{L}_{\vi u, T}\).

Theorem 3.1 If \((\vi u(\cdot), T)\) corresponds to a simple cycle, then the operator \[ \oper{L}_{\vi u, T} = \bmtr{ \oper{\jacobi} - T \jacobi \vi f(\vi u) & - \vi f(\vi u) \\ \evalOp{0} - \evalOp{1} & 0 \\ \testInt{\vi f(\vi u)} & 0 }, \] from \(\contf {[0,1], \R^n} {1} \times \R\) into \(\contf {[0,1], \R^n} {1} \times \R^n \times \R\) is one-to-one and onto.

Proof. See [7], page 36.

By Theorem 3.1 we know (3.11) can be collectively rewritten as \[ \oper{L}_{\vi u, T} \mtr{\vi w \\ S} = \vi A_{\vi u, T}, \tag{3.12}\] where \(\vi A_{\vi u, T}\) captures the right-hand side of (3.11), and that \(\oper{L}_{\vi u, T}\) is regular. Thus, an appropriate correction \((\vi w, S)\) can indeed be found by Newton’s method, see Section 1.3.5 (assuming we can find a suitable discretization, as (3.12) is infinite-dimensional).

This approach can be extended to the continuation of a limit cycle branch of a system \[ \dvi x = \vi f(\vi x, \alpha), \; \vi x \in \R^n, \; \alpha \in \R, \tag{3.13}\] with respect to (w.r.t.) parameter \(\alpha \in \R\) as the solution to the following BVP (\(\vi u_{\text{old}}\) denotes the period-1 cycle for the previous value of \(\alpha\)): \[ \rcases{ \dvi u(s) - T \vi f(\vi u(s), \alpha) &= \vi 0, \; s \in [0,1], \\ \vi u(0) - \vi u(1) &= \vi 0, \\ \int_0^1 \scal{\vi u(\sigma)} {\dvi u_{\text{old}}(\sigma)} \dd \sigma &= 0. } \tag{3.14}\] Furthermore, from Theorem 3.1 and the implicit function theorem, we obtain that a simple cycle has locally unique continuation w.r.t. parameter \(\alpha\). Again, we can define a corresponding operator to (3.14), \[ \bmtr{ \oper{\jacobi}_{\vi u} - T \jacobi_{\vi u} \vi f(\vi u, \alpha) & - \vi f(\vi u, \alpha) & - T \pDeriv {\vi f} {\alpha}(\vi u, \alpha) \\ \evalOp{0} - \evalOp{1} & 0 & 0 \\ \testInt{\dvi u_{\text{old}}} & 0 & 0 }, \tag{3.15}\] which then has a one-dimensional null space for a simple cycle \(\vi u\).

To solve (3.7) (or (3.14)) numerically, we must reduce it to a finite-dimensional problem (as opposed to searching in the infinite-dimensional space of periodic functions \(\vi u\)), that is, we need to choose a discretization. Although shooting, multiple shooting, and finite differences are sometimes used, the most common and most capable is the method of orthogonal collocation.

Consider for simplicity the BVP (3.7). We shall introduce a partitioning of the interval \([0,1]\) by \(N - 1\) mesh points, i.e., \[ 0 = s_0 < s_1 < \dots < s_N = 1. \] The primary goal of the orthogonal collocation is to approximate the true solution \(\vi u\) by a continuous, piecewise-polynomial function, defined as a vector polynomial \(\vi u^{(j)}(s)\) of maximal degree \(m\) within each subinterval \([s_j, s_{j+1}]\), \(j = 0, 1, \dots, N-1\). Such polynomials can be specified by \(m\) collocation points \[ s_j < \zeta_{j,1} < \zeta_{j,2} < \dots < \zeta_{j,m} < s_{j+1} \] belonging to each subinterval, where the approximate solution must satisfy the time-normalized ODE system (3.3) exactly. That is, we require \[ \evaluateAt{\deriv {\vi u^{(j)}} s} {s = \zeta_{j,i}} = T \vi f(\vi u^{(j)}(\zeta_{j,i})) \tag{3.16}\] for \(i = 1, \dots, m\), \(j = 0,1,\dots, N-1\). To put it another way, we are trying to find \(N\) polynomial splines of degree \(m\) to approximate the true limit cycle. As the theory of splines tells us, it is advantageous to represent a given vector polynomial \(\vi u^{(j)}\) by vectors of (unknown) solutions \[ \vi u^{j,k} = \vi u^{(j)}(s_{j,k}) \in \R^n, \; k = 0, 1, \dots, m, \] where \(s_{j,k}\) are representation points6, i.e., nodes of the equidistant partitioning of the interval \[ s_j = s_{j,0} < s_{j,1} < \dots < s_{j,m-1} < s_{j,m} = s_{j+1}, \] implying \(\vi u^{j,m} = \vi u^{j+1,0}\), given by \[ s_{j,k} = s_j + \frac k m (s_{j+1} - s_{j}), \; j = 0,1,\dots,N-1, \; k = 0,1,\dots,m. \] This yields the following formulation of \(\vi u^{(j)}(s)\) as an interpolation between the values at the representation points, \[ \vi u^{(j)}(s) = \sum_{i = 0}^m \vi u^{j,i} \lagrPoly_{j,i}(s), \tag{3.17}\] where \(\lagrPoly_{j,i}(s)\) are the Lagrange basis polynomials on \([s_j, s_{j+1}]\), \[ \lagrPoly_{j,i}(s) = \prod_{k = 0, k \neq i}^m \frac {s - s_{j,k}} {s_{j,i} - s_{j,k}} \implies \lagrPoly_{j,i}(s_{j,k}) = \indicator{i = k} = \lcases{ 1, & i = k, \\ 0, & i \neq k. } \]
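
As a small illustration of this representation, the following Julia sketch builds the equidistant representation points and evaluates the Lagrange basis polynomials \(\lagrPoly_{j,i}\) on one subinterval (indices are 1-based in the code, unlike the 0-based notation above; all names are illustrative).

```julia
# Equidistant representation points s_{j,0}, …, s_{j,m} on [sj, sj1].
representation_points(sj, sj1, m) = [sj + (k / m) * (sj1 - sj) for k in 0:m]

# Lagrange basis polynomial ℓ_i(s) over the nodes in srep, cf. (3.17).
lagrange_basis(srep, i, s) =
    prod((s - srep[k]) / (srep[i] - srep[k]) for k in eachindex(srep) if k != i)

srep = representation_points(0.0, 0.25, 3)   # first subinterval of a uniform mesh, m = 3
lagrange_basis(srep, 2, srep[2])             # = 1 at its own node, 0 at the others
```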

Using (3.17), we can translate the problem of finding a vector polynomial (3.16) into a problem of determining the coefficients \(\vi u^{j,i}\). Similarly, we can discretize the periodicity and phase conditions (3.4), (3.6), yielding respectively \[ \vi u^{0,0} = \vi u^{N-1, m} \tag{3.18}\] and \[ \sum_{j = 0}^{N-1} \sum_{i = 0}^{m-1} \omega_{j,i} \scal{\vi u^{j,i}} {\dvi v^{j,i}} + \omega_{N,0} \scal{\vi u^{N,0}} {\dvi v^{N,0}} = 0, \tag{3.19}\] where \(\dvi v^{j,i}\) are the values of the derivative of the reference period-1 solution at the representation points \(s_{j,i}\) and \(\omega_{j,i}\) are the Gauss-Legendre quadrature coefficients7. By combining (3.16), (3.17), (3.18), and (3.19), we get a system (3.20) of \(nmN + n + 1\) scalar equations for the unknown values \(\vi u^{j,i} \in \R^n\) at the representation points \(s_{j,i}\) (there are \((mN + 1)\) values \(\vi u^{j,i}\)) and for the unknown period \(T\), \[ \rcases{ \sum_{i = 0}^m \vi u^{j,i} \lagrPoly'_{j,i}(\zeta_{j,k}) - T \vi f\brackets{\sum_{i = 0}^m \vi u^{j,i} \lagrPoly_{j,i}(\zeta_{j,k})} &= \vi 0_n, \\ j = 0, 1, \dots, N-1, \; k = 1, \dots, m, & \\ \vi u^{0,0} - \vi u^{N-1, m} &= \vi 0_n, \\ \sum_{j = 0}^{N-1}\sum_{i = 0}^{m-1} \omega_{j,i} \scal{\vi u^{j,i}}{\dvi v^{j,i}} + \omega_{N,0} \scal{\vi u^{N,0}} {\dvi v^{N,0}} &= 0. } \tag{3.20}\] The resulting finite-dimensional system (3.20) is nonlinear, thus we solve it by Newton’s method (or some modification exploiting the block structure of the Jacobian, which we will discuss later) — this problem is well-posed by Theorem 3.1 and the discussion afterward.
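
A sketch of a single collocation residual from (3.20), reusing `lagrange_basis` and `representation_points` from the previous sketch; everything here is illustrative, and real implementations such as MatCont assemble all blocks of the system at once.

```julia
# Derivative ℓ'_i(s) of the Lagrange basis polynomial (analytic product-rule formula).
dlagrange_basis(srep, i, s) = sum(
    (1 / (srep[i] - srep[k])) *
        prod(((s - srep[l]) / (srep[i] - srep[l])
              for l in eachindex(srep) if l != i && l != k); init = 1.0)
    for k in eachindex(srep) if k != i)

# Residual of one collocation condition at ζ: Σᵢ uᶦ ℓ'_i(ζ) - T f(Σᵢ uᶦ ℓ_i(ζ)),
# where U is an n × (m+1) matrix of unknown values at the representation points.
function collocation_residual(f, U, T, srep, ζ)
    u  = sum(U[:, i] * lagrange_basis(srep, i, ζ)  for i in eachindex(srep))
    du = sum(U[:, i] * dlagrange_basis(srep, i, ζ) for i in eachindex(srep))
    return du - T * f(u)
end
```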

So far, we have translated (3.7) into a finite-dimensional problem, but we have neglected a discussion of the crucial choice of the collocation points \(\set{\zeta_{j,i}}\). The distribution of collocation points affects the approximation, so a natural goal is to minimize its error. It can be shown that the optimal choice is to position them at so-called Gauss points, which are the roots of the \(m\)-th degree Legendre polynomial corresponding to the subinterval \([s_j, s_{j+1}]\). For more information, we refer the reader to [9]. These roots originally belong to the interval \([-1, 1]\), but can be easily transformed to each \([s_j, s_{j+1}]\). Furthermore, the Legendre polynomials form an orthogonal system on the interval \([-1, 1]\) (or its transformation), which lends the name to the orthogonal collocation method.
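
In Julia, the Gauss points for each subinterval can be obtained, e.g., from the FastGaussQuadrature.jl package (an assumption of this sketch) and mapped affinely from \([-1,1]\) to \([s_j, s_{j+1}]\):

```julia
using FastGaussQuadrature

# Roots of the m-th degree Legendre polynomial, mapped from [-1, 1] to [sj, sj1].
function gauss_points(sj, sj1, m)
    x, _ = gausslegendre(m)                        # nodes (and weights) on [-1, 1]
    return @. (sj + sj1) / 2 + (sj1 - sj) / 2 * x  # ζ_{j,1} < … < ζ_{j,m} in (sj, sj1)
end

gauss_points(0.0, 0.25, 3)   # collocation points for the first subinterval, m = 3
```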

Let us note that the collocation at Gauss points leads to a highly accurate approximation of a smooth solution of (3.7) by (3.20) at the mesh points, namely \[ \norm{\vi u(s_j) - \vi u^{j, 0}} = O(h^{2m}), \] where \(h = \max_{0 \leq j \leq N-1} (s_{j+1} - s_j)\).

Let us turn our attention back to the discretized BVP system (3.20) and consider its continuation variant (i.e., the discretization of (3.14)), which can be written as \(\vi H(\vi X) = \vi 0\) for appropriate \(\vi H\) and \(\vi X = \brackets{\set{\vi u^{j,k}}, T, \alpha} \in \R^{mnN + n + 2}\) corresponding to discretized \(\vi u\), period \(T\) and the continuation parameter \(\alpha\). Its Jacobian matrix \(\jacobi \vi H\) reads (for \(n = 2\), \(N = 3\), and \(m = 2\))8 \[ {\scriptsize \mtr{ \vi u^{0,0} & & \vi u^{0,1} & & \vi u^{1,0} & & \vi u^{1,1} & & \vi u^{2,0} & & \vi u^{2,1} & & \vi u^{3, 0} & & T & \alpha \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & & & & & & & & & & & \nonzeroEl & \nonzeroEl & & \\ \nonzeroEl & \nonzeroEl & & & & & & & & & & & \nonzeroEl & \nonzeroEl & & \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl }}, \tag{3.21}\] where \(\nonzeroEl\) denotes a generally non-zero element. Thus \(\jacobi \vi H\) is sparse, and because it corresponds to the linear operator (3.15) it also has a one-dimensional null-space satisfying \(\vi H(\vi X) = \vi 0\) (at generic points). Let us note that by later also adding a constraint equation stemming from the chosen continuation method, see Section 3.2, we will get a fully determined system.

The Jacobian matrix \(\jacobi \vi H\) is crucial for Newton’s method, and its properties deeply affect the computation. Indeed, assuming the residual formulation of Newton’s method (1.57), we need to be able to effectively solve the system \(\jacobi \vi H \cdot \vi X = \vi R\). Fortunately, in the ODE case, we can perform so-called condensation of parameters, see [10] and [11], eliminating the dependence on intermediate variables (i.e., \(\vi u^{j, k}\) for \(k = 1, \dots, m-1\)) utilizing Gaussian elimination performed on rows. This yields (\(\zeroEl\)’s mark eliminated elements) \[ {\scriptsize \mtr{ \vi u^{0,0} & & \vi u^{0,1} & & \vi u^{1,0} & & \vi u^{1,1} & & \vi u^{2,0} & & \vi u^{2,1} & & \vi u^{3, 0} & & T & \alpha \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ + & + & \zeroEl & \zeroEl & + & + & & & & & & & & & + & + \\ + & + & \zeroEl & \zeroEl & \zeroEl & + & & & & & & & & & + & + \\ & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & \nonzeroEl & \nonzeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & + & + & \zeroEl & \zeroEl & + & + & & & & & + & + \\ & & & & + & + & \zeroEl & \zeroEl & \zeroEl & + & & & & & + & + \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ & & & & & & & & + & + & \zeroEl & \zeroEl & + & + & + & + \\ & & & & & & & & + & + & \zeroEl & \zeroEl & \zeroEl & + & + & + \\ + & + & & & & & & & & & & & + & + & & \\ + & + & & & & & & & & & & & + & + & & \\ + & + & \zeroEl & \zeroEl & + & + & \zeroEl & \zeroEl & + & + & \zeroEl & \zeroEl & + & + & + & + \\ + & + & \zeroEl & \zeroEl & + & + & \zeroEl & \zeroEl & + & + & \zeroEl & \zeroEl & + & + & + & + }}, \tag{3.22}\] which has a decoupled subsystem for \(+\) signs. Hence, we can first solve the subsystem given by \(+\)’s, and follow up by solving for the intermediate variables afterward, see [11]. This significantly reduces the dimensionality and thus also memory requirements.

Assume that we have corrected (in possibly multiple steps of the Newton-Raphson method) \(\vi X = \brackets{\set{\vi u^{j,k}}, T, \alpha}\), converging onto a new point \[ \vi X_c = \brackets{\set{\vi u_c^{j,k}}, T_c, \alpha_c} \] on the branch of the limit cycle. Then \(\vi X_c\) approximates a solution to (3.14). Moreover, we can further perform Gaussian elimination on the rows of \(\jacobi \vi H(\vi X_c)\), see (3.22) and [7], to obtain \[ {\scriptsize \mtr{ \vi u^{0,0} & & \vi u^{0,1} & & \vi u^{1,0} & & \vi u^{1,1} & & \vi u^{2,0} & & \vi u^{2,1} & & \vi u^{3, 0} & & T & \alpha \\ \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \zeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \zeroEl & \zeroEl & \zeroEl & \nonzeroEl & & & & & & & & & \nonzeroEl & \nonzeroEl \\ & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & \nonzeroEl & \nonzeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & & & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & & & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \nonzeroEl & & & & & \nonzeroEl & \nonzeroEl \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ & & & & & & & & \nonzeroEl & \nonzeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ \ast & \ast & & & & & & & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \star & \star & \nonzeroEl & \nonzeroEl \\ \ast & \ast & & & & & & & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \star & \star & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & & & & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & & & & & & & & & & & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl \\ \nonzeroEl & \nonzeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \zeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl & \nonzeroEl }}. \tag{3.23}\] Furthermore, we denote by \(\vi P_0\) the matrix block marked by \(\ast\) signs and by \(\vi P_1\) the block of \(\star\)’s. Let \(\vi u(0) = \vi u^{0,0}\) and \(\vi u(1) = \vi u^{N,0}\); then the following holds: \[ \jacobi \vi H(\vi X_c) \cdot \brackets{\set{\vi u^{j,k}}, 0, 0}\Tr = \vi 0 \implies \vi P_0 \vi u(0) + \vi P_1 \vi u(1) = \vi 0. \] A careful reader might notice that \(\vi P_0\) and \(\vi P_1\) belong to the linearized equation (3.16), i.e., by (3.8) the equality \[ \vi u(1) = - \vi P_1\Inv \vi P_0 \vi u(0) \tag{3.24}\] approximates \[ \vi u(1) = \vi \Gamma(1) \vi \Gamma(0)\Inv \vi u(0). \] Thus, using \(\vi \Gamma(0) = \ident_n\) from (3.8), \(- \vi P_1\Inv \vi P_0 \approx \vi \Gamma(1) = \vi M(T_0)\) for the monodromy matrix \(\vi M(T_0)\) corresponding to the cycle \(L_0\) of (1.4). Computing the eigenvalues of \(- \vi P_1\Inv \vi P_0\) then estimates the multipliers of the cycle \(L_0\), which determine its stability.
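
Once the blocks \(\vi P_0\) and \(\vi P_1\) have been extracted from the eliminated Jacobian, the multiplier estimate is a one-liner; a sketch, assuming the extraction has already been done:

```julia
using LinearAlgebra

# Floquet multipliers from the condensed blocks of (3.23), cf. (3.24):
# -P1 \ P0 ≈ Γ(1) = M(T0), so its eigenvalues approximate the multipliers of L0.
floquet_multipliers(P0, P1) = eigvals(-(P1 \ P0))
```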

3.1.2 Semidynamical Systems

Now we shall turn our attention to semidynamical systems induced by DDEs. Throughout this section, we will assume the DDE of the form (1.24), \[ \dvi x(t) = \vi f(\vi x(t), \vi x(t - \tau_1), \dots, \vi x(t - \tau_H)), \] together with \(0 = \tau_0 < \tau_1 < \dots < \tau_H = \tau\). Although for the purposes of continuation, a dependence of \(\vi f\) on some parameter \(\alpha \in \R\) is assumed, here we consider \(\alpha\) fixed and omit it from the notation. Let us note this section is primarily based on [12] and [6].

Equilibrium Localization

The techniques of equilibrium localization in DDEs do not differ much from the ODE case. Indeed, assume we are trying to find a stable equilibrium. By time integration, see Sections 1.2.2 and 1.3.2, we can reach a point \(\vi x_t\) in a close neighborhood of the steady-state solution. Then, we can apply the Newton-Raphson method, see Section 1.3.5, to the \(n\)-dimensional system \[ \vi f(\vi x, \dots, \vi x) = \vi 0, \tag{3.25}\] where we use \(\vi x^0 = \vi x_t(0)\) as the initial approximation. The iterations \(\vi x^i\) of the Newton-Raphson method converge to the steady state \(\vi x^*\) for \(\vi x^0\) sufficiently close to \(\vi x^*\). Naturally, this process can be performed with a free parameter \(\alpha\) to obtain the continuation curve of the equilibrium by considering (3.25), along with a constraint given by the continuation method, see Section 3.2.

Suppose again we have found \(\vi x^*\) with the desired accuracy and that our goal is now to determine its stability. It can be shown that the stability of \(\vi x^*\) follows the stability of (the zero solution of) the linearized equation, see (1.26), \[ \dvi x(t) = \sum_{i = 0}^H \jacobi_{\tau_i} \vi f^* \vi x(t - \tau_i), \tag{3.26}\] where \(\jacobi_{\tau_i} \vi f^* = \jacobi_{\tau_i} \vi f(\vi x^*, \dots, \vi x^*)\). Moreover, the linearized equation (3.26) is asymptotically stable if all roots \(\lmbd \in \C\) of the corresponding characteristic equation \[ \det\brackets{\lmbd \ident - \jacobi_{\tau_0} \vi f^* - \sum_{i = 1}^H \jacobi_{\tau_i} \vi f^* e^{-\lmbd \tau_i}} = 0 \tag{3.27}\] lie in the open left half-plane, i.e., \(\ReOf{\lmbd} < 0\), see [13]. Unfortunately, the equation (3.27) has infinitely many roots. However, only finitely many roots have a real part larger than a given threshold. Thus, to study the stability of \(\vi x^*\), it suffices to calculate only the stability-determining roots \(\lmbd \in \C\), i.e., the roots satisfying \(\ReOf{\lmbd} \geq r\) for a given threshold \(r < 0\) close to zero.
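
As a sketch, the characteristic matrix of (3.27) is easy to evaluate numerically; the example below encodes the scalar equation \(\dvi x(t) = -\vi x(t-1)\) (chosen purely for illustration), whose roots satisfy \(\lmbd = -e^{-\lmbd}\).

```julia
using LinearAlgebra

# Characteristic matrix Δ(λ) = λI - J₀ - Σᵢ Jᵢ e^{-λτᵢ}, cf. (3.27); J0 and Js hold the
# Jacobians of f at the equilibrium w.r.t. the non-delayed and delayed arguments.
Δ(λ, J0, Js, τs) = λ * I - J0 - sum(Js[i] * exp(-λ * τs[i]) for i in eachindex(Js))

# Scalar example ẋ(t) = -x(t-1): a root λ of det Δ(λ) = 0 satisfies λ = -e^{-λ}.
d(λ) = det(Δ(λ, fill(0.0, 1, 1), [fill(-1.0, 1, 1)], [1.0]))
d(-0.3181 + 1.3372im)   # ≈ 0 near the known rightmost pair of roots
```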

Although analytical conditions for stability do exist (and are derived by techniques from complex analysis), we shall omit their theoretical derivation to maintain a focused and concise scope. Recently, there have been developments in numerical methods that approximate the stability-determining (i.e., right-most) characteristic roots of (3.27) using either a discretized solution operator \(\solOp(t)\) of (3.26), see Section 1.2.3.2 and Definition 1.27, or a discretized infinitesimal generator \(\infGen\) of the semigroup of the solution operator of (3.26), see Note 3.1.

The solution operator \(\solOp(t)\) has eigenvalues \(\mu(t)\), which are related to the roots of the characteristic equation by \(\mu(t) = e^{\lmbd t}\), see [12]. Clearly, to determine the stability of \(\vi x^*\), we need to look at its dominant eigenvalues. Moreover, it follows that for \(t\) large, the eigenvalues \(\mu(t)\) are well separated, which we can exploit in the computation. Below, we describe an algorithm to compute the dominant eigenvalues of \(\solOp(\diff t)\), where \(\diff t\) is the time-step of a linear multistep (LMS) method.

Note 3.1: Infinitesimal generator

In Section 1.2.3.2, we have discussed that \(\solOp(t)\) is a one-parameter strongly continuous semigroup, see [12]. One can thus define the infinitesimal generator \(\infGen\) of \(\set{\solOp(t)}\), see [14], by \[ \infGen \vi \phi = \lim_{t \to 0^+} \frac {\solOp(t)\vi \phi - \vi \phi} t. \] In particular, for the linearized equation (3.26), the infinitesimal generator becomes \[ \begin{gathered} \infGen \vi x(\tht) = \dvi x(\tht),\\ \vi x \in \domain \infGen \letDef \set{\vi x \in \contStateSpace \divider \dvi x \in \contStateSpace \; \& \; \dvi x(0) = \sum_{i = 0}^H \jacobi_{\tau_i} \vi f^* \vi x(-\tau_i)}. \end{gathered} \] Note that some algorithms for the computation of the stability of an equilibrium use a discretized infinitesimal generator instead of the solution operator.

When discretized, either of the discussed operators results in some matrix, for which we need to compute its eigenvalues. Therefore, it is our key concern that this matrix is small or its dominant eigenvalues can be computed efficiently by using an iterative scheme, such as subspace iteration.

A straightforward way to approximate the solution operator is to consider the matrix form of the numerical integration of the linearized equation. Such an approach has been developed by Engelborghs et al., see [15], [12], and [6], and relies on the linear multistep method, see Section 1.3.2 and Algorithm 1.3.

Let us discretize the delay interval \([-\tau, 0]\) (typically extended slightly to both the left and right, as we shall discuss below) by an equidistant mesh of constant step-length \(h\) to approximate the solution operator \(\solOp(h)\). Then, the solution \(\vi x\) can be represented by a finite discrete set of points \(\vi x^i \letDef \vi x(t_i)\) with \(t_i = ih\). A \(k\)-step LMS method for (3.26) with step-length \(h\) may be written as \[ \sum_{i = 0}^k \alpha_i \vi x^i = h \sum_{i = 0}^k \beta_i \cdot \brackets{\jacobi_{\tau_0} \vi f^* \vi x^i + \sum_{j = 1}^H \jacobi_{\tau_j} \vi f^* \tilde{\vi x}(t_i - \tau_j)}, \tag{3.28}\] where \(\alpha_i, \beta_i \in \R\) are parameters and \(\tilde{\vi x}(t_i - \tau_j)\) are computed by polynomial interpolation with \(s_- \in \N\) and \(s_+ \in \N\) points to the left and right, respectively, when \((t_i - \tau_j)\) does not “align” with a point of the discretization. To be precise, DDE-BIFTOOL uses so-called Nordsieck interpolation, see [6].

Clearly, the discretization of the operator \(\solOp(h)\) itself is given by the linear map from \([\vi x^l, \dots, \vi x^{k-1}]\Tr\) to \([\vi x^{l+1}, \dots, \vi x^k]\Tr\), where \(l = - s_- - \ceil{\tau/h}\) and where the mapping is prescribed by (3.28). This procedure results in an \(N \times N\) matrix approximation of the operator \(\solOp(h)\), where \[ N = n(-l + k) \approx n\tau/h. \tag{3.29}\] Unfortunately, due to our choice of approximation and discretization of the solution operator \(\solOp(h)\) at the step-length \(h\), which is typically rather small, the eigenvalues \(\mu(h)\) of \(\solOp(h)\) are not well separated. Nonetheless, we can compute them using the QR method with computational cost \(O(N^3)\), from which the roots of the characteristic equation can be calculated. Moreover, conditions on the step-length \(h\) have been derived such that all characteristic roots \(\lmbd\) with \(\ReOf{\lmbd} \geq r\) for some threshold \(r < 0\) are approximated sufficiently accurately, see [15].
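
The following Julia sketch illustrates the construction on the simplest possible case: a scalar equation with one delay, a single implicit-Euler step standing in for the general LMS scheme (3.28), and a step-length chosen so that \(\tau/h\) is an integer (hence no interpolation is needed). The test equation and all names are illustrative assumptions, not the DDE-BIFTOOL implementation.

```julia
using LinearAlgebra

# Discretized solution operator S(h) for ẋ(t) = a x(t) + b x(t - τ), mapping the
# state (x^{k-d}, …, x^k) to (x^{k+1-d}, …, x^{k+1}) via one implicit-Euler step.
function solution_operator(a, b, τ, h)
    d = round(Int, τ / h)                # assume τ/h ∈ ℕ, so no interpolation
    N = d + 1
    S = zeros(N, N)
    for i in 1:d
        S[i, i+1] = 1.0                  # history entries are merely shifted
    end
    S[N, N] = 1 / (1 - h * a)            # implicit Euler:
    S[N, 2] += h * b / (1 - h * a)       # x^{k+1} = (x^k + h b x^{k+1-d}) / (1 - h a)
    return S
end

h = 0.01
μ = eigvals(solution_operator(-1.0, -0.5, 1.0, h))
λs = log.(complex.(μ)) ./ h              # μ(h) = e^{λh} recovers approximate roots of (3.27)
sort(λs, by = real, rev = true)[1:4]     # rightmost, stability-determining roots
```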

Note 3.2: Runge-Kutta discretization

Let us note a discretization induced by the Runge-Kutta numerical integrator can also be used, albeit it is less frequent in practice. We refer the interested reader to [16].

Periodic Orbit Localization

The localization of periodic orbits in DDEs proceeds similarly to the ODE case, see Section 3.1.1.2. Indeed, the main differences lie in the discretization and normalization to unit period. Let us remark this subsection is based on [17], [6] and [12], but also [18], [19], and [20].

So far, our discretization scheme, prescribed by representation points, \[ 0 = s_0 = s_{0,0} < s_{0,1} < \dots < s_{0, m-1} < s_{0,m} = s_{1} = s_{1,0} < \dots < s_N = T, \] has been designed for a single period interval \([0,T]\) (or \([0,1]\) after period-1 normalization). As such, we can periodically extend it to cover the interval \([-\tau, T]\). There are now two possible, yet fundamentally different approaches.

The first approach, used in DDE-BIFTOOL, see [6] or [12], normalizes the system in time to a unit period for the given cycle (analogously as we did for ODEs). Afterward, we can wrap the delays around into the interval \([0,1]\), which yields the following BVP9 \[ \rcases{ \dvi u (\zeta_{j,i}) - T \vi f\brackets{\vi u(\zeta_{j,i}), \vi u \brackets{\brackets{\zeta_{j,i} - \frac {\tau_1} T} \Mod [0,1]}, \dots, \vi u \brackets{\brackets{\zeta_{j,i} - \frac {\tau_H} T} \Mod [0,1]}} &= \vi 0, \\ j = 0, 1, \dots, N-1, \; i = 1, \dots, m, \qquad & \\ \vi u(0) - \vi u(1) &= \vi 0, \\ \int_0^1 \scal {\vi u(\sigma)} {\dvi u_{\text{old}}(\sigma)} \dd \sigma &= 0, } \tag{3.30}\] where \(\zeta_{j,i}\) are again the collocation points and the period \(T\) is an unknown variable. Unfortunately, the resulting linearized system for the BVP (3.30) has a banded structure, which cannot be easily exploited to aid the used linear solver. Moreover, the transformation via the modulo operator complicates the computation of Floquet multipliers, because it distorts the included matrix corresponding to the discretized time-integration operator.

The alternative method, see [17] and [12], relies on the construction of the orthogonal collocation BVP without subsequent wrapping of the delays. This produces a larger BVP, as collocation of the delayed segment is also required, \[ \rcasesAt{2}{ \dvi u (\zeta_{j,i}) - T \vi f\brackets{\vi u(\zeta_{j,i}), \vi u \brackets{\zeta_{j,i} - \frac {\tau_1} T}, \dots, \vi u \brackets{\zeta_{j,i} - \frac {\tau_H} T}} &= \vi H^{\text{dde}} (\vi u, T) &&= \vi 0, \\ j = 0, 1, \dots, N-1, \; i = 1, \dots, m, \qquad & & & \\ \vi u_0 - \vi u_1 &= \vi H^{\text{per}}(\vi u) &&= \vi 0, \\ \int_0^1 \scal {\vi u(\sigma)} {\dvi u_{\text{old}}(\sigma)} \dd \sigma &= \vi \Phi [\vi u] &&= 0 } = \vi H(\vi u, T) = \vi 0, \tag{3.31}\] where \(\vi u_0\) and \(\vi u_1\) denote the solution segments on \([-\linefrac{\tau} T, 0]\) and \([1 - \linefrac{\tau} T, 1]\), respectively.

Note that the trajectory \(\vi u(s)\) of the periodic orbit for \(s \in [-\tau/T, 1]\) may be split into two parts: \(\vi u_0\), the initial segment capturing the delay, and \(\vi u_t(s)\) for \(s \in [0, 1]\), representing the cycle itself. To solve the BVP (3.31), one can again employ the Newton-Raphson method in its residual form, see Section 1.3.5, where its linearization may be written as \[ \rcases{ \jacobi_{\vi u_0} \vi H^{\text{dde}} \diff \vi u_0 + \jacobi_{\vi u_t} \vi H^{\text{dde}} \diff \vi u_t + \jacobi_{T} \vi H^{\text{dde}} \diff T &= - \vi H^{\text{dde}}, \\ \diff \vi u_0 - \diff \vi u_1 &= - \vi H^{\text{per}}, \\ \jacobi_{\vi u_0}\vi \Phi \diff \vi u_0 + \jacobi_{\vi u_t}\vi \Phi \diff \vi u_t + \jacobi_T \vi \Phi \diff T &= - \vi \Phi. } \tag{3.32}\] All functions and their partial derivatives are assumed to be evaluated at a given \((\vi u, T)\), such that \(\diff \vi u_0, \diff \vi u_t, \diff T\) are the correction variables for (3.32) in the context of the Newton-Raphson method (this notation was introduced in Section 1.3.5).

Tip 3.1: Similarities with ODEs

By omitting terms corresponding to partial derivatives with respect to \(\vi u_0\), we get a similar expression to the linearized BVP for the ODE case (3.11).

In [17], Verheyden and Lust proposed a method of condensation of parameters for (3.32), such that the BVP is reduced to the following form \[ \mtr{ \vi \Gamma(1) - \ident & \ast \\ \ast & \ast } \mtr{\diff \vi u_0 \\ \diff T} = - \vi R, \tag{3.33}\] where \(\vi \Gamma(1)\) is the monodromy matrix, see (3.8), which can be obtained from the time-integration matrix \(\vi \Gamma_t\) for variational equations10, \[ \vi \Gamma_t = - \brackets{\jacobi_{\vi u_t} \vi H^{\text{dde}}}\Inv \jacobi _{\vi u_0} \vi H^{\text{dde}}, \tag{3.34}\] similarly to the ODE case, see (3.24). The exact form of (3.33) can be found in [17]. The Newton-Picard method, see [21], [22] or [23], can be used to efficiently solve (3.33). Afterward, \(\diff \vi u_t\) may be obtained from the first equation of (3.32). Lastly, Floquet multipliers of the periodic orbit can be computed as the dominant eigenvalues of the discretized monodromy matrix \(\vi \Gamma(1)\).

3.2 Continuation

Let us now explore the continuation process itself (for references, see [1] and [24]). In general, a (finite-dimensional) continuation problem is the task of finding a curve in \(\R^{n+1}\) prescribed by the system \[ \vi H(\vi X) = \vi 0, \; \vi H : \R^{n+1} \to \R^n. \tag{3.35}\] As an example, one might consider the equilibrium curve, which is given as the \(\alpha\)-parametrized solution of \(\vi f(\vi x, \alpha) = \vi 0\) for a (semi)dynamical system \[ \dvi x = \vi f(\vi x, \alpha). \] All of the continuation problems we shall be interested in arise from parametrized localization problems, see Section 3.1.

From the implicit function theorem it follows that the system (3.35) locally defines a smooth one-dimensional manifold (curve) \(\manifold\) passing through a point \(\vi X^0\), provided \(\rank \Jacobi_{\vi H} = n\), where \(\Jacobi_{\vi H} = \jacobi \vi H(\vi X^0)\). By a numerical solution to (3.35), we understand a sequence of points \(\vi X^0, \vi X^1, \dots\) approximating the curve \(\manifold\) with the desired accuracy. The continuation algorithms we shall present here will be based on the prediction–correction process.

For the prediction part, we will solely use the tangent prediction \[ \tilde{\vi X}^i_{(0)} = \vi X^i + h \vi v^i, \tag{3.36}\] where \(\vi v^i \in \R^{n+1}\) is the normalized tangent vector to the curve \(\manifold\) at the (regular) point \(\vi X^i\) and \(h \in \R\) is the stepsize. Using the arclength parametrization \(\vi X(s)\) of the curve near \(\vi X^i\) (i.e., \(\vi X(0) = \vi X^i\)), we can substitute it into (3.35) and differentiate with respect to \(s\), which yields \[ \jacobi \vi H(\vi X^i) \overbrace{\evaluateAt{\dvi X(s)}{s = 0}}^{\vi v^i} = \vi 0. \] In other words, \(\vi v^i\) must lie in the null space of the matrix of the local linearization of \(\manifold\) around \(\vi X^i\). Moreover, since \(\vi X^i\) is presumed to be regular, i.e., \(\rank \jacobi \vi H(\vi X^i) = n\), the vector \(\vi v^i\) is determined uniquely up to a constant multiple.
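
A minimal Julia sketch of the tangent predictor; the Jacobian function `JH`, supplied by the user, is an assumption here.

```julia
using LinearAlgebra

# Tangent prediction (3.36): v spans the null space of JH(X) at a regular point.
function tangent_predict(JH, X, vprev, h)
    v = vec(nullspace(JH(X)))      # n × (n+1) Jacobian ⟹ one-dimensional null space
    v /= norm(v)
    v *= sign(dot(v, vprev))       # orient consistently with the previous tangent
    return X + h * v, v
end
```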

Tip 3.2: Secant prediction

Although we have introduced only the tangent prediction, another popular method is the secant prediction. For more information, one can see [1].

At this point, we have predicted a point \(\tilde{\vi X}^i_{(0)}\) assumed to be close to the curve, which we need to “correct” so that it lies on \(\manifold\) within a specified degree of accuracy. This, in general, requires the use of a Newton-like method, see Section 1.3.5, as (3.35) might be highly nonlinear. However, Newton-like methods can only be applied to fully determined systems, i.e., the number of equations must match the number of unknown variables. Therefore, we add an extra scalar condition (remember \(\vi H : \R^{n+1} \to \R^n\)) \[ g^i(\vi X) = 0 \] to (3.35), producing \[ \rcases{ \vi H(\vi X) &= \vi 0, \\ g^i(\vi X) &= 0. } \tag{3.37}\] Finally, Newton’s method can be used on (3.37).

One possibility is to select a hyper-plane \(P^i\) passing through the point \(\tilde{\vi X}^i_{(0)}\), such that \(\vi v^i\) is its normal vector. Then, we can find the corrected point \(\vi X^{i+1}\) as the intersection of \(P^i\) and the curve \(\manifold\). Further, any point \(\vi X\) on \(P^i\) must satisfy \[ \overbrace{\scal {\vi X - \tilde{\vi X}^i_{(0)}} {\vi v^i}}^{g^i(\vi X)} = 0, \tag{3.38}\] which defines \(g^i : \R^{n+1} \to \R\). When combined with the tangent prediction, this method is called the pseudo-arclength continuation, see Figure 3.2 (a).
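
A sketch of the resulting corrector — Newton’s method applied to the bordered system (3.37) with the hyper-plane condition (3.38); `H` and its Jacobian `JH` are user-supplied assumptions.

```julia
using LinearAlgebra

# Pseudo-arclength corrector: solve H(X) = 0, ⟨X - Xp, v⟩ = 0 by Newton's method,
# starting from the tangent prediction Xp with the (fixed) tangent vector v.
function correct(H, JH, Xp, v; tol=1e-10, maxit=10)
    X = copy(Xp)
    for _ in 1:maxit
        r = vcat(H(X), dot(X - Xp, v))   # residual of the bordered system (3.37)
        norm(r) < tol && break
        J = vcat(JH(X), v')              # (n+1) × (n+1) bordered Jacobian
        X -= J \ r
    end
    return X
end
```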

(a) Pseudo-arclength continuation
(b) Moore-Penrose continuation
Figure 3.2: Overview of different prediction–correction methods for continuation (adapted from [24]).

Another possible approach is to adapt \(g^i(\vi X)\) throughout the correction process, i.e., in the course of the Newton iterations. This produces a sequence of scalar continuation constraints \(\Seqnc{g^i_{(k)}} {k = 0} {N_i}\), with \(g^i_{(0)} \equiv g^i\), such that after the \(k\)-th iteration of Newton’s method, producing \(\tilde{\vi X}^i_{(k)}\), we update the restricting hyper-plane to \(P^i_{(k)}\) defined by \[ g^i_{(k+1)}(\vi X) = \scal {\vi X - \tilde{\vi X}^i_{(k)}} {\tilde{\vi v}^i_{(k)}} = 0, \] where \(\tilde{\vi v}^i_{(k)} \neq \vi 0\) lies in the null space of \(\jacobi \vi H(\tilde{\vi X}^i_{(k)})\), see (3.38). This method is commonly referred to as the Moore-Penrose continuation method, see Figure 3.2 (b) for an illustration.

3.3 Cycle Continuation in Coupled Oscillators

As we have discussed in Section 2.1 on the problem formulation, we shall again consider the problem of computing phase shifts between two weakly coupled oscillators with a possible delay in the coupling. Let us disclose that this section is heavily based on the article by Záthurecký et al. [25].

To justify the soundness of the grid-based approach to computing the phase shift between the oscillators of (2.1), we simply relied on continuous dependence on parameters and initial conditions. Unfortunately, this argument is insufficient for the cycle-continuation-based approach, as nothing a priori guarantees the (local) existence of a suitable (stable) cycle, which could be used for continuation at an arbitrary coupling strength \(\lmbd > 0\).

Notably, Záthurecký et al. [25] have shown that there must locally exist a cycle manifold \(\manifold\) (with respect to coupling strength and an arbitrary parameter affecting the frequency of one oscillator) near zero coupling between the two nearly identical oscillators. In particular, we shall focus primarily on the anti-phase synchronization, which may be used to explain the emergence of frequencies higher than physiological, see [26] and [27]. This will be performed mainly by studying the phase shifts between the two oscillators.

Recall the studied system (2.1), \[ \begin{aligned} \dvi x_1(t) &= \vi f(\vi x_1(t); \omega_1) + \lmbd \vi K(\vi x_1(t), \vi x_2(t - \tau), \sigma), \\ \dvi x_2(t) &= \vi f(\vi x_2(t); \omega_2) + \lmbd \vi K(\vi x_1(t - \tau), \vi x_2(t), \sigma), \end{aligned} \] describing \(\lmbd\)-weakly coupled oscillators \(\vi x_1(t), \vi x_2(t)\). In particular, we shall again concern ourselves with the case of two weakly coupled interneuron models, see (1.63).

The continuation methodology (detailed below) enables us to study Arnold tongues corresponding to different shifts. In particular, the continuation process computes the entire cross-section of the manifold \(\manifold\) at a given \(\lmbd_i\). This can be represented as a curve \(\curve_{\lmbd_i}\), for instance, in the 3-dimensional space \((\lmbd, C_2, T)\). Projecting the parts of \(\curve_{\lmbd_i}\), where the corresponding cycle is stable, onto the \((\lmbd, C_2)\) parameter plane yields a slice (with fixed \(\lmbd_i\)) of all Arnold tongues of \(\manifold\) as present in \(\curve_{\lmbd_i}\). This very fact is obscured by the simulation approach. Indeed, if two submanifolds of \(\manifold\) corresponding to stable underlying cycles overlap when projected onto the parameter space, i.e., they form two overlapping Arnold tongues, the simulation approach will highlight only the one whose cycle’s basin of attraction contains the chosen initial condition (even if both cycles are stable).

Let us begin with \(\lmbd = 0\), i.e., decoupled nearly-identical oscillators. Then any shift may occur between them (as there is effectively no connection), and from the theoretical contributions of [25], we know that in-phase and anti-phase synchrony must exist for cycles on \(\manifold\) close to \(0\)- and \(T/2\)-shifts for small enough coupling (i.e., the distortion from the coupling will not prevent phase synchronization). Therefore, we choose a fixed discretization \(\Lambda = \seqnc \lmbd i 0 {N_{\Lambda}}\) of \(\lmbd\).

For each \(\lmbd_i \in \Lambda\), we let the system converge to a stable cycle from a fixed initial condition \(\vi \phi\). After successful convergence, we then continue it with respect to \(C_2\), which yields a slice of the unknown cycle manifold \(\manifold\).

Note 3.3: Discretization of \(\lmbd\)

Note that currently, we have “blindly” discretized \(\lmbd\), and then for each element \(\lmbd_i \in \Lambda\) of the discretization, we have attempted to converge onto a stable periodic orbit. Only afterward could we compute the continuation with respect to \(C_2\). Another approach would be first to continue the cycle with respect to \(\lmbd\), producing a curve \(\curve_{\lmbd}\) in the desired manifold \(\manifold\). Then, we could perform the continuation with respect to \(C_2\) for each point of \(\curve_{\lmbd}\), which is already discretized due to the nature of numerical continuation.

Moreover, for each cycle on the \(C_2\)-continuation curve, we know its corresponding period \(T\). This allows us to use the shift searching methods, see Section 2.2.2, with much better accuracy (compared to a merely estimated period). In particular, we partition the interval \([0,T]\), and for each of the disjoint subintervals \(Q_i \subseteq [0,T]\), we run the shift searching method. Finally, we select the shift with the least error.

In addition, when there is no delay in the coupling \(\vi K\) (i.e., the ODE case), one can also continue the limit point of cycles (LPC) curves with respect to \(\lmbd\) and \(C_2\). Note that these curves do not always originate from the zero coupling, as can be seen in Figures 3.3 and 3.4 (parts (C) and (D)). Unfortunately, DDE-BIFTOOL does not support the continuation of LPC curves. Nevertheless, we may attempt to estimate these curves from the boundaries of stable and unstable limit cycles on the manifold \(\manifold\).

Now, we can show the relation of \(\lmbd\) and \(C_2\) to both the period \(T\) and the shift \(\beta\), see Figures 3.3 and 3.4 (parts (A) and (B)). Furthermore, we can project the cycle manifold onto the parameter plane \((C_2, \lmbd)\), where it traces out the desired tongues. Let us remark that tongues corresponding to different stable sub-manifolds of \(\manifold\) can overlap.

Figures 3.3 and 3.4 show this methodology applied to two weakly coupled interneuron models, see (1.63). The continuation was performed using MatCont, see [4] and [5], in the ODE case and DDE-BIFTOOL, see [6], in the DDE case. In particular, we calculated the optimal shift in the first coordinate of both oscillators over a \(3T\)-long trajectory. The region around zero delay is numerically unstable for the continuation, see Section 3.1.2 — for a very small delay (notice that the delay affects only the coupling term), the DDE system approaches the behavior of an ODE, resulting in an ill-conditioned Jacobian matrix for the orthogonal collocation/continuation. For implementation details, see the Git repository for [25], and the Git repository associated with this thesis for specifics of plotting the computed data.

The runtimes necessary to generate data for Figures 3.3 and 3.4 (parts (A) and (B)) are approximately 43 seconds for the ODE case and 26 minutes for the DDE version, respectively. Note that although parts (C) and (D) of the same figures require a finer discretization \(\Lambda\), in practice one would probably not need to use it. Indeed, there is no inherent increase in error when choosing a sparser discretization, whereas, in the simulation-based approach, it would affect the entire result. As there is no dependence on results from other slices throughout the calculation, the parallelization is much easier and more efficient11. The required runtime in MATLAB, see [28], in the DDE case, is heavily dependent on the version of the dde23 solver, as discussed in Tip 1.2.

While there exists a Julia package DDEBifurcationKit.jl for continuation in DDEs, see [29], it currently does not support the computation of Floquet multipliers for periodic orbits, see this GitHub issue for more details. For completeness’ sake, we provide a reference single-threaded unoptimized implementation using this package as well, which can be found in the Git repository for [25]. The data generation process described above takes about 3 minutes for the ODE case (on 4 threads).

Figure 3.3: Numerical continuation of the cycle manifold \(\manifold\) of the system (1.63) without delay for \(C_1 = 1\) and free parameter \(C_2\) using MatCont is shown in (A); (B) plots period \(T\) with respect to \(\lmbd\) and \(C_2\). The colors of the continued section curves indicate stability (blue) or instability (red) of the limit cycles. The grey curves in panels (A) and (B) represent the projections of the black LPC curves onto \((C_2, \lmbd)\) delineating the Arnold tongues. The heatmaps of (C) the in-phase and (D) the anti-phase Arnold tongues illustrate the corresponding phase shift \(\beta\). Adapted from [25] (redrawn in Julia using data from the article).
Figure 3.4: Numerical continuation of the cycle manifold \(\manifold\) of the system (1.63) with delay \(\tau = 0.025\) for \(C_1 = 1\) and free parameter \(C_2\) using DDE-BIFTOOL is shown in (A); (B) plots period \(T\) with respect to \(\lmbd\) and \(C_2\). The colors of the continued section curves indicate stability (blue) or instability (red) of the limit cycles. The grey curves in panels (A) and (B) represent the projections of the black estimated LPC curves onto \((C_2, \lmbd)\) delineating the Arnold tongues. The heatmaps of (C) the in-phase and (D) the anti-phase Arnold tongues illustrate the corresponding phase shift \(\beta\). Adapted from [25] (redrawn in Julia using data from the article).

Finally, Figure 3.5 depicts the comparison of both approaches. Specifically, we plot the heatmaps of the shifts \(\beta\) computed on MetaCentrum using the simulation-on-grid approach. Moreover, the black curves mark the boundaries of the anti-phase Arnold tongues calculated with the bifurcation-based approach. Overall, one can see that the two methods give comparable results. However, the simulation-based approach suffers from a rather high artifact rate, even though we used a very dense underlying grid.

(a) \(\tau = 0\)
(b) \(\tau = 0.025\)
Figure 3.5: Comparison of heatmaps of phase shifts \(\beta\) computed using the simulation-based approach, see Chapter 2 and Figure 2.8, and borders of anti-phase Arnold tongues obtained from the continuation of the cycle manifold \(\manifold\), see above.

  1. Note that this behavior is local, and global properties are often more complicated, which is even more prevalent in higher dimensions.↩︎

  2. By (1.4) and the periodicity constraint, there exists a map \(\vi \phi(t) = \evolOp(t, \vi x(t))\) such that it maps the interval \([t_0, t_0 + T_0]\) onto the cycle \(L_0\) and is surely continuous by Definition 1.1 and Definition 1.14 for sufficiently smooth \(\vi f\) (otherwise \(\vi f\) would not give rise to a dynamical system in the first place). Moreover, \(\vi \phi(t)\) is injective on \([t_0, t_0 + T_0)\) as there can be no intersections. In total, \(L_0 = \vi \phi([t_0, t_0 + T_0])\) is a Jordan curve.↩︎

  3. In practice, it suffices to consider a sufficiently large connected region containing the cycle instead of the entire state space.↩︎

  4. In other words, \(\vi \Gamma(s)\) is a fundamental matrix of (3.8). Moreover, this viewpoint explains (3.10) as a corollary of the method of variation of constants.↩︎

  5. Recall the algebraic multiplicity of an eigenvalue \(\lmbd\) of a matrix \(\vi A\) is its multiplicity as a root of the characteristic polynomial. Furthermore, geometric multiplicity of \(\lmbd\) is defined as \(\dim \ker (\vi A - \lmbd \ident)\). The algebraic multiplicity is always greater than or equal to the geometric multiplicity for any given eigenvalue \(\lmbd\) of \(\vi A\).↩︎

  6. Throughout the discussion of orthogonal collocation, we will use both collocation and representation points, which do not (in general) coincide.↩︎

  7. Gauss-Legendre quadrature is a method of approximating a definite integral \(\int_{-1}^1 f(x) \dd x \approx \sum_{i = 1}^n \omega_i f(x_i)\). The optimal values of \(x_i\) and \(\omega_i\) for a given value of \(n\) can be computed analytically. For more information, see [9].↩︎

  8. The last row of \(\jacobi \vi H\) represents the continuation equation, which we have not yet discussed (and therefore, we assume, it can, in theory, depend on all variables).↩︎

  9. The notation \(t \Mod [0,1]\) refers to \(t - \max\set{k \in \Z \divider k \leq t}\).↩︎

  10. The variational equations for a given cycle were introduced in Definition 1.21 and correspond to the linearization of the orthogonal collocation BVP.↩︎

  11. The implementation of the simulation-on-grid relies on using clever iteration order and multithreading locks, which results in delays when a thread is waiting to write to or read from the results matrices.↩︎