Neuroscience is among the richest present-day applications of mathematics. In neuronal dynamics, synchronization phenomena occurring in networks of individual neurons are of particular importance. These complex patterns have been tied to the most fundamental capabilities of the brain, be it information processing, memory encoding, or the generation of rhythmic brain activity, see [1], [2].
Although it is likely safe to assume a certain level of robustness of brain functions with respect to anomalies in the synchronization pattern, significant abnormalities in rhythmic brain activity can be attributed to neurological disorders. The sheer variety of neurological disorders precludes any comprehensive study within a mere master’s thesis, thus we shall limit ourselves to focal epilepsy. Let us note that even this is a vast topic in both neurology and applied mathematics, and we will only manage to scratch the surface.
From the perspective of neuronal dynamics, these abnormalities (in the case of focal epilepsy) manifest themselves as anomalous synchronization within the neuronal network and can lead to spontaneous seizures, see [3]. This type of synchronization is often closely linked to the presence of fast ripples (oscillations in the frequency range 250–600 Hz), very-fast ripples (600–1000 Hz), ultra-fast ripples (1000–2000 Hz), and ultra-fast oscillations (UFOs, above 2000 Hz) within depth electroencephalographic (EEG) recordings from patients with focal epilepsy. This possible connection between focal points of drug-resistant epilepsy and high-frequency EEG readings in their vicinity has attracted significant research interest due to the potential use of such oscillations as biomarkers for epileptogenic zones, see [4], [5], [6], [7], [8], [9], [10], and [11]. Let us note that the exact mechanism governing these high-frequency oscillations in EEG signals remains unknown, especially when physiological limitations of action potential firing rates are taken into account, see [12].
Existing research sheds light on the role of anti-phase synchronization as a possible mechanism for the emergence of high-frequency oscillations (HFOs) and very-high-frequency oscillations (VHFOs) in epileptic EEG signals, see [13], [14], and [15]. The applied part of this thesis is mainly based on [15] (the author of this thesis also contributed to the publication), although our focus will be the comparison between bifurcation- and simulation-based approaches to the same problem.
Let us thus consider two weakly coupled oscillators (e.g., neurons, clusters thereof, neural masses, etc.). For illustration, these will be two interneuron models connected by a (possibly delayed) gap-junctional coupling, see Section 1.4. By phase synchronization, we understand two (or more) cyclic signals featuring a consistent relationship between their phases. Note that the amplitudes or even frequencies may differ. Understanding of this phenomenon, often called phase locking in the context of bifurcation theory, can be aided by examining corresponding Arnold tongues, which are regions where a phase synchronization occurs, see [16], [17].
We shall be primarily interested in a system consisting of two weakly coupled oscillators with a possible delay in the coupling. In such systems, regions of phase-synchrony between the two oscillators may arise for certain values of the coupling strength and of a parameter influencing the oscillation frequency of one of the oscillators. Then, by the continuous dependence of the solution on parameters and initial values for both ODEs, see [18] (Theorem 1.3), and DDEs, see [19] (Chapter 2.2), we can explore the phase-shift in the coupled system via small changes in both parameters.
In general, the system of interest takes the following DDE form, \[
\begin{aligned}
\dvi x_1(t) &= \vi f(\vi x_1(t); \omega_1) + \lmbd \vi K(\vi x_1(t), \vi x_2(t - \tau), \sigma), \\
\dvi x_2(t) &= \vi f(\vi x_2(t); \omega_2) + \lmbd \vi K(\vi x_1(t - \tau), \vi x_2(t), \sigma),
\end{aligned}
\tag{2.1}\] where \(\vi x_1(t), \vi x_2(t) \in \R^n\) are the state variables, \(\omega_1, \omega_2 \in \R\) are parameters influencing the oscillation frequencies of the respective models, \(\sigma\) is an arbitrary fixed phase parameter, and \(\lmbd \in \R^+_0\) is the coupling strength. Lastly, \(\vi K\) represents the coupling function. Note that \(\vi x_i\) does not have to represent a single neuron (or a model thereof), but can even stand for synchronized clusters of neurons or neural masses1.
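To make the setting concrete, the following is a minimal sketch of (2.1) in Julia using DelayDiffEq.jl (part of the DifferentialEquations.jl ecosystem). The oscillator \(\vi f\) (a van der Pol system whose vector field is rescaled by \(\omega\)) and the diffusive, gap-junction-like coupling are illustrative placeholders, not the interneuron model (1.63) and its coupling from Section 1.4; the coupling is also specialized per equation to the diffusive case rather than passed as one literal \(\vi K\).

```julia
using DelayDiffEq

# Placeholder oscillator (NOT the interneuron model (1.63)): a van der Pol
# system whose vector field is rescaled by ω, so ω controls the frequency.
f(x, ω) = ω .* [x[2], (1 - x[1]^2) * x[2] - x[1]]
# Diffusive (gap-junction-like) coupling: the current into each cell is
# proportional to (other - self); σ enters only as a gain here.
couple(self, other, σ) = σ .* (other .- self)

function coupled!(du, u, h, p, t)
    ω₁, ω₂, λ, σ, τ = p
    x₁, x₂ = u[1:2], u[3:4]
    uτ = h(p, t - τ)                      # history evaluated at t - τ
    x₁τ, x₂τ = uτ[1:2], uτ[3:4]
    du[1:2] .= f(x₁, ω₁) .+ λ .* couple(x₁, x₂τ, σ)   # sees x₂(t - τ)
    du[3:4] .= f(x₂, ω₂) .+ λ .* couple(x₂, x₁τ, σ)   # sees x₁(t - τ)
    return nothing
end

p = (1.0, 1.05, 0.05, 1.0, 0.01)          # (ω₁, ω₂, λ, σ, τ), all illustrative
ϕ(p, t) = [2.0, 0.0, -2.0, 0.0]           # constant initial history
prob = DDEProblem(coupled!, ϕ(p, 0.0), ϕ, (0.0, 200.0), p; constant_lags=[p[5]])
sol = solve(prob, MethodOfSteps(Tsit5()); saveat=0.05)
```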
2.2 Grid-Based Phase-Shift Computation
Throughout this section, we shall iteratively develop a framework for the computation of the phase-shift on the anti-phase tongue for two weakly coupled oscillators, most often neurons, their synchronized clusters, or neural masses. Note that the algorithms and procedures described in this section were developed and tested gradually throughout the research process and are the original work of the author. This should also explain the lack of references.
Thus consider the system (2.1) and choose \(\omega_1\) or \(\omega_2\) as the free parameter2. For illustration purposes, we shall use two interneuron models, see (1.63), with delayed gap-junctional coupling, see Section 1.4.3, e.g., \(\tau = 0.01\), and treat \(\omega_2\) (through \(C_2\) for (1.63)) as the free parameter. Moreover, assume bounds \(\omega_- \leq \omega_2 \leq \omega_+\) and the corresponding discretization \[
\underbrace{\overbrace{\omega_2^0}^{\omega_-} < \omega_2^1 < \dots < \overbrace{\omega_2^{N_{\Omega}}}^{\omega_+}}_{\Omega},
\] with analogous bounds and discretization chosen for \(\lambda\) as well, producing set \(\Lambda\) of permissible values for \(\lmbd\). Both are most often chosen as equidistant partitions of intervals stemming from respective bounds. We shall denote \(\meshGrid = \Omega \times \Lambda\) the mesh of all possible combinations of \(\omega_2^i\) and \(\lmbd^i\) and let \[
\indicesOf{\meshGrid} = \set{\vi i = (i_1, i_2) \in \N_0^2 \divider 0 \leq i_1 \leq N_{\Omega}, 0 \leq i_2 \leq N_{\Lambda}}
\] be the set of all possible indices in \(\meshGrid\).
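In Julia, the mesh and its index set can be constructed, e.g., as follows (a minimal sketch; the bounds and grid sizes below are placeholders):

```julia
# Equidistant discretizations Ω and Λ and the index set of the mesh G = Ω × Λ.
ω₋, ω₊ = 0.9, 1.1          # placeholder bounds for ω₂
λ₋, λ₊ = 0.0, 0.1          # placeholder bounds for λ
N_Ω, N_Λ = 75, 75

Ω = range(ω₋, ω₊; length=N_Ω + 1)      # ω₂⁰ = ω₋ < … < ω₂^N_Ω = ω₊
Λ = range(λ₋, λ₊; length=N_Λ + 1)
G = collect(Iterators.product(Ω, Λ))   # mesh of all (ω₂, λ) combinations
I_G = CartesianIndices(G)              # all indices 𝐢 = (i₁, i₂) of the mesh
```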
The most straightforward way to determine for which pairs \((\omega_2, \lmbd) \in \meshGrid\) the coupled system (2.1) features a stable3 anti-phase cycle is to solve (2.1) forward in time from a fixed initial condition \(\vi \phi \in \contStateSpace\) on a fixed time interval \([0, t_{\mathrm{end}}]\). This is a reasonable methodology if we assume that the system converges to the synchronized cycle and that we choose a long-enough transient, included in the interval \([0, t_{\mathrm{end}}]\), such that we end up in an acceptably small neighborhood of the limit cycle.
In case there is no delay in the coupling, i.e., \(\tau = 0\), we can integrate the system forward in time, see Section 1.3.1. Similarly, for delayed coupling, one can employ the method of steps, which allows for an analogous forward-in-time integration, see Section 1.3.2. Either way, this results in two trajectories, both corresponding to evaluations at times \(\Seqnc {t^i} {i = 0} N\), namely \(\Seqnc {\vi x_1^i} {i = 0} N\) and \(\Seqnc {\vi x_2^i} {i = 0} N\), each for one oscillator. Moreover, denote by \[
\Seqnc {\vi \xi^i} {i = 0} N = \Seqnc {\mtr{\vi x_1^i \\ \vi x_2^i}} {i = 0} N
\] the joint trajectory. For simplicity, we shall assume that the time interval is divided equidistantly, i.e., there is a constant time-step \(\diff t\).
Now, we face two main challenges:

1. How to determine the period \(T\) of the coupled system (2.1)?
2. Knowing the period \(T\), how to calculate the phase-shift \(\beta\) between the oscillators?
Let us note that the following numerical experiments and calculations were carried out in Julia using the custom-made package GridWalker.jl (https://gitlab.com/sceptri-university/gamu/gridwalker.jl).
2.2.1 Period Searching
To calculate the joint period \(T\) of (2.1) from \(\Seqnc {\vi \xi^i} {i = 0} N\), we propose two distinct methods. The first method relies on the fact that for capacitance-based models of neurons, there are certain notable components, e.g., the membrane potential, which should predominantly determine the overall behavior of the neuronal model. Moreover, we assume the periodic signal of this state variable behaves rather nicely, attaining a single (local) minimum and maximum on the repeating section of the minimal period. However, in practice, we distinguish two main types of neuronal oscillations (behaviors). Neuronal spiking, for which our assumptions hold, is a simple type of neuronal oscillation. It is complemented by a more complex bursting behavior, as is present in pyramidal cells, see [20]. In such a case, an envelope must be constructed, which then allows us to use our methodology on the envelope itself. Nevertheless, by choosing the \(j\)-th element of the state vector corresponding to the coupled model, we get a 1-dimensional time series \(\Seqnc {\xi^i_j} {i = 0} N\). One can then compute first differences \(\Seqnc {\diff \xi^i_j} {i = 1} N\), where \(\diff \xi^i_j \letDef \xi^i_j - \xi^{i - 1}_j\).
Clearly, (local) extrema of the time series \(\Seqnc {\xi^i_j} {i = 0} N\) occur when the differences \(\Seqnc {\diff \xi^i_j} {i = 1} N\) change sign4. In other words, we attempt to detect peaks (which should reflect the true period) as indices satisfying \[
i \text{ is a peak } \iff \diff \xi^i_j > 0 \; \land \; \diff \xi^{i+1}_j < 0.
\] Let us denote \(I^{\uparrow}_j\) the set of all peaks originating from the choice of notable index \(j\); then the period \(T\) can be approximated as the \(\diff t\) multiple of the mean difference \(\avg{\diff I^{\uparrow}_j}\) of the peak indices, i.e., \[
T \approx \estim T_{\diff} = \diff t \cdot \avg{\diff I^{\uparrow}_j}.
\] We shall call \(\estim T_{\diff}\) the differences period estimate. Note that while this method is rather inexpensive from the computational point of view, it depends on the appropriate choice of \(j\) and the necessary assumption that the period of \(\xi_j\) accurately reflects the period of the joint oscillation. For an example, see Figure 2.1.
Tip 2.1: On the length of the monotony of \(\xi^i_j\)
Currently, we require \(\xi^i_j\) to be strictly increasing for only one step to the left of a peak and strictly decreasing for only one step to its right. This can be extended to require \(\xi^i_j\) to be increasing for \(k_+\) steps on the left and decreasing for \(k_-\) steps on the right of a peak. Such a modification allows us to detect periods of much more complicated signals, as illustrated in the sketch below.
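A minimal sketch of this extended peak detector follows; the function name and keyword arguments are illustrative, not part of GridWalker.jl.

```julia
# An index i is declared a peak only if ξ is strictly increasing for k₊ steps
# to its left and strictly decreasing for k₋ steps to its right; with
# k₊ = k₋ = 1 this reduces to the peak condition above.
function find_peaks(ξ::AbstractVector; k₊::Int=1, k₋::Int=1)
    dξ = diff(ξ)                       # dξ[i] = ξ[i+1] - ξ[i]
    peaks = Int[]
    for i in (k₊+1):(length(ξ)-k₋)
        left_up    = all(>(0), dξ[i-k₊:i-1])
        right_down = all(<(0), dξ[i:i+k₋-1])
        left_up && right_down && push!(peaks, i)
    end
    return peaks
end

# The differences period estimate then reads, e.g.,
# using Statistics; T̂ = Δt * mean(diff(find_peaks(ξ; k₊=2, k₋=2)))
```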
Code
```julia
using LaTeXStrings, Statistics

fig = Figure(size=(450, 250))
ax1 = Axis(fig[1, 1], ylabel=L"y(t)",
    xtickformat=v -> latexstring.(v), ytickformat=v -> latexstring.(v))
ax2 = Axis(fig[2, 1], ylabel=L"\Delta y(t)",
    xtickformat=v -> latexstring.(v), ytickformat=v -> latexstring.(v))
linkxaxes!(ax1, ax2)

Δt = 0.5
t = 0:Δt:13π
n = length(t)
y = @. sin(t) + 1 / 3 * cos(2 * t + π / 3)

ts = t[2:end]
dy = diff(y)
dy_up = dy .> 0
dy_down = dy .< 0

# Find where the monotony changes from up to down
y_monotony = dy_up[2:end] .&& dy_down[1:end-1]

# Compute mean distance between ups
Iⱼ = findall(identity, y_monotony)
for i in Iⱼ
    vlines!(ax1, ts[i]; linewidth=2, color=ones(n), colormap=:Zissou1)
    vlines!(ax2, ts[i]; linewidth=2, color=ones(n), colormap=:Zissou1)
end

scatterlines!(ax1, t, y, colormap=:Zissou1)
scatterlines!(ax2, ts, dy, colormap=:Zissou1, color=[el > 0 ? 1 : 2 for el in dy])

ΔĪⱼ = mean(diff(Iⱼ))
T̂ = ΔĪⱼ * Δt
ax2.xlabel = LaTeXString("\$\\hat{T}_{\\Delta} \\approx $T̂\$")
fig
```
Figure 2.1: Approximation of the period of a test function \(y(t)= \sin(t) + \frac {\cos\brackets{2t + \frac {\pi} 3}} 3\) (with true period \(2\pi\)) by differences period estimation method.
The second approach one can use is less heuristic but potentially more computationally intensive. Naturally, the joint period \(T\) should minimize the discrepancy between \(t\) and \(t + T\) states, i.e.,5\[
T = \minOf{\argmin_{T > 0} \int_a^b \norm{\vi \xi(t) - \vi \xi(t + T)} \dd t}, \quad (a,b) \subset \R.
\tag{2.2}\]
As it stands, (2.2) is infeasible to solve computationally, but one can rather easily discretize it using \(\Seqnc {\vi \xi^i} {i = 0} N\) to obtain \[
\estim T_{\min, P} = \argmin_{T \in P} \overbrace{\sum_{i = 0}^{i_{\mathrm{end} - T}} \norm{\vi \xi^i - \vi \xi^{i + \floor{T/\diff t}}}}^{f_{\vi \xi}(T)},
\tag{2.3}\] where \(P = (p_-, p_+)\), with \(p_+ > p_- > 0\), is a chosen interval and \(i_{\mathrm{end} - T}\) denotes the maximum index \(i\) such that \(t^i \leq t_{\mathrm{end}} - T\). The discretized problem (2.3) can now be solved using the one-dimensional optimization methods discussed in Section 1.3.4, under the assumption of unimodality!6
Note that \(\estim T_{\min, P}\) is not guaranteed to approximate the minimal period. In practice, we often solve (2.3) repeatedly for multiple intervals \(P_i\) and choose the estimate \(\estim T_{\min, P_i}\) with the least error. In particular, we found that using \(P_i = \brackets{p_-, \frac {p_+}{2^{i-1}}}\) gives satisfactory results, see the sketch below. Moreover, for a short interval \(P\), the unimodality of the loss function corresponding to (2.3) is likely to be satisfied. In addition, Figure 2.2 illustrates the algorithm.
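A sketch of this interval-halving heuristic, assuming a loss function \(f_{\vi \xi}\) as in (2.3) (e.g., the one from Figure 2.2); the function name best_period is illustrative.

```julia
using Optim

# Solve (2.3) on the intervals Pᵢ = (p₋, p₊ / 2^(i-1)) via golden-section
# search and keep the estimate with the smallest residual f_ξ.
function best_period(f_ξ, p₋, p₊; halvings=4)
    best_T, best_val = NaN, Inf
    for i in 1:halvings
        upper = p₊ / 2^(i - 1)
        upper > p₋ || break            # interval degenerated, stop halving
        res = Optim.optimize(f_ξ, p₋, upper, Optim.GoldenSection())
        if Optim.minimum(res) < best_val
            best_T, best_val = Optim.minimizer(res), Optim.minimum(res)
        end
    end
    return best_T
end
```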
Code
```julia
using Optim, LinearAlgebra

fig = Figure(size=(450, 250))
ax1 = Axis(fig[1, 1], ylabel=L"y(t)",
    xtickformat=v -> latexstring.(v), ytickformat=v -> latexstring.(v))
ax2 = Axis(fig[2, 1], ylabel=L"f_{\mathbf{\xi}}(T)",
    xtickformat=v -> latexstring.(v), ytickformat=v -> latexstring.(round.(Int, v)))
P = (1, 10)
xlims!(ax2, P)

Δt = 0.5
t = 0:Δt:13π
n = length(t)
y = @. sin(t) + 1 / 3 * cos(2 * t + π / 3)

i_end(T) = findlast(tᵢ <= (t[end] - T) for tᵢ in t)
T_ex = 3.2
f_ξ(T) = sum(norm(y[i] - y[i+floor(Int, T / Δt)]) for i in 1:i_end(T))

p = first(P):0.01:last(P)
result = Optim.optimize(f_ξ, P..., Optim.GoldenSection())
T̂ = round(result.minimizer, digits=2)
ax2.xlabel = LaTeXString("\$\\hat{T}_{\\min, P} \\approx $T̂\$")

scatterlines!(ax1, t[1:i_end(T_ex)], [y[i+floor(Int, T_ex / Δt)] for i in 1:i_end(T_ex)],
    color=(:gray, 0.4), markersize=5)
scatterlines!(ax1, t[1:i_end(T̂)], [y[i+floor(Int, T̂ / Δt)] for i in 1:i_end(T̂)],
    color=ones(i_end(T̂)), colormap=:Zissou1, markersize=5)
scatterlines!(ax1, t, y, colormap=:Zissou1, markersize=5)
vlines!(ax2, T_ex, color=(:gray, 0.4))
vlines!(ax2, T̂, color=[1], colormap=:Zissou1)
lines!(ax2, p, f_ξ.(p); colormap=:Zissou1)
scatter!(ax2, [T_ex], [f_ξ(T_ex)], color=:gray)
scatter!(ax2, [T̂], [f_ξ(T̂)], color=[1], colormap=:Zissou1)
fig
```
Figure 2.2: Approximation of the period of a test function \(y(t)= \sin(t) + \frac {\cos\brackets{2t + \frac {\pi} 3}} 3\) (with true period \(2\pi\)) by optimal period estimation method.
Tip 2.2: Modifications to the optimal period method
In (2.3), we have used the full state vector for the discrepancy calculation, but as with the computation of \(\estim T_{\diff}\), one can choose only a sub-vector as well. Another common modification is to calculate the dissimilarity in (2.3) only after a certain transient has passed (such that we presume we have converged to the cycle), i.e., on the interval with indices from \(i_{\mathrm{start}}\) to \(i_{\mathrm{end} - T}\).
2.2.2 Shift Searching
After one has successfully estimated the period of the joint model, what remains is to approximate the shift between the two oscillators of (2.1). This can be achieved by solving a problem very similar to (2.3), namely \[
\estim \beta_d = \argmin_{\beta \in Q} \sum_{i = i_{\mathrm{start}}}^{i_{\mathrm{end} - \beta}} d\brackets{\vi x_1^i, \vi x_2^{i + \floor{\beta/\diff t}}},
\tag{2.4}\] where \(Q = (q_-, q_+)\) is a chosen interval with \(0 \leq q_- < q_+ < \estim T\) and \(d\) is a metric. Commonly, metrics induced by \(\ltwo\)-norm \(\norm{\cdot}_2\) or maximum-norm \(\norm{\cdot}_{\infty}\) are used. Moreover, \(i_{\mathrm{start}}\) can again be used to offset the computation of shift \(\beta\) after a transient has passed.
Note that we shall primarily focus on calculating a shift \(\beta\) between periodic trajectories of a notable component \(j\) in each of the oscillators, i.e., \[
\estim \beta_{d,j} = \argmin_{\beta \in Q} \sum_{i = i_{\mathrm{start}}}^{i_{\mathrm{end} - \beta}} d\brackets{x_{1,j}^i, x_{2,j}^{i + \floor{\beta/\diff t}}}.
\tag{2.5}\] In total, (2.4) and (2.5) both lead to a one-dimensional optimization problem on a given interval and as such can be solved using the methods proposed in Section 1.3.4. In particular, in the case of shift searching, the unimodality of the corresponding loss function seems to be satisfied more generally.
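For illustration, the sketch below solves the discretized problem (2.5) by exhaustive search over the admissible discrete shifts, in place of the one-dimensional methods of Section 1.3.4; averaging the per-sample distance, which keeps windows of different lengths comparable, is an implementation detail left implicit in (2.5). All names are illustrative.

```julia
using Statistics

# Exhaustive variant of (2.5): try every discrete shift kΔt with kΔt < 0.99 T̂
# and return the one minimizing the mean distance between the j-th components.
function shift_estimate(x₁ⱼ, x₂ⱼ, Δt, T̂; i_start=1, d=(a, b) -> abs(a - b))
    kmax = floor(Int, 0.99 * T̂ / Δt)   # Q = (0, 0.99 T̂)
    best_k, best_val = 0, Inf
    for k in 0:kmax
        iend = length(x₁ⱼ) - k          # last index with i + k still in range
        val = mean(d(x₁ⱼ[i], x₂ⱼ[i+k]) for i in i_start:iend)
        if val < best_val
            best_k, best_val = k, val
        end
    end
    return best_k * Δt                  # β̂ in time units; divide by T̂ to normalize
end
```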
For a comparison of computed shifts with respect to differences- and optimization-based period estimation, see Figures 2.3 and 2.4. There, it is apparent that the central Arnold tongue has a rather sharp boundary at some places, which is desired from the theoretical point of view, see [17] and [15], as it is known that Arnold tongues are bordered by limit point of cycles curves, see Note 2.1. On the other hand, for both low and high values of \(\lambda\), our approximation of the tongue is either smoothed out (likely signaling that our choice of the initial condition was suboptimal) or scattered.
Note 2.1: Arnold tongue
In general, Arnold tongues are regions (in parameter space) of rational7 synchrony near a Neimark-Sacker bifurcation. For a continuous-time dynamical system, this, in essence, means that a given cycle switches stability and a torus with the opposite stability emerges around it (i.e., a stable cycle loses stability when it undergoes the supercritical Neimark-Sacker bifurcation while a stable torus appears around the now-unstable cycle). The dynamics on the torus then provide the two “directions of oscillation”, for which we can study its synchrony. Furthermore, two weakly coupled oscillators themselves are governed by dynamics on a torus, such that the influence of coupling strength \(\lmbd\) mimics the transition generically originating from the Neimark-Sacker bifurcation.
(a) Differences period estimation
(b) Optimal period estimation
Figure 2.3: Heatmap of shifts \(\beta\) between two \(\lmbd\)-weakly coupled interneuron models with respect to \(C_2\) without delay. The grid is determined by 75-point discretizations of the intervals of interest for both \(C_2\) and \(\lambda\). At every point, the same initial condition \(\vi \phi(\cdot) \equiv \vi u_0\) was used. The left figure presents the shift heatmap based on the differences period, whereas the right figure shows the optimal period-based shift. Trajectories are computed for 1500 ms, periods are computed on the last 20%, and shifts on the last 10% of their lengths. Note that the optimal period is calculated by halving the upper constraint up to 4 times. The shift is estimated based on the first state variable, i.e., \(V\), in both interneurons.
(a) Differences period estimation
(b) Optimal period estimation
Figure 2.4: Heatmap of shifts \(\beta\) between two \(\lmbd\)-weakly coupled interneuron models with respect to \(C_2\) with delay \(\tau = 0.05\). Note that except for the time integration (performed by the continuous ODE method, see Section 1.3.2), the same methods were used as in Figure 2.3.
2.2.3 Starters, Iterators, and Indexers
Starters
So far, we have utilized a rather naive approach of always using the same predetermined starting condition — we shall refer to this as a cold start8. Nevertheless, we have no guarantee such a choice was optimal for any grid point, much less for the entire grid.
A natural solution to this problem is to reuse trajectories we have already computed for some other combination of parameters. In general, it is beneficial to choose as an initial condition the endpoint of a trajectory corresponding to parameter values close to our current ones. Moreover, we want to consider only those nearby parameter values whose trajectories lead to anti-phase synchronization. In total, the optimal reference index \(\starter(\vi i) \in \indicesOf{\meshGrid}\), from which to read the new initial condition for the shift computation at \(\vi i\) (using \(\vi i \in \indicesOf{\meshGrid}\) to denote the combined index in both axes of the grid \(\meshGrid\)), is given by the following minimization problem, \[
\starter_{\indicesOf{\meshGrid}}(\vi i) = \argmin_{\vi j \in \indicesOf{\meshGrid}} \loss(\vi i, \vi j, \beta_{\vi j}),
\tag{2.6}\] where we use either the multiplicative loss, \[
\loss_{\mathrm{mult}}(\vi i, \vi j, \beta) = \norm{\vi i - \vi j} \cdot \brackets{\absval{\beta - \frac 1 2} + c},
\tag{2.7}\] where \(c > 0\) is an offset to the discrepancy in the shift from anti-phase, or the additive loss, \[
\loss_{\mathrm{add}}(\vi i, \vi j, \beta) = c_1 \norm{\vi i - \vi j} + c_2 \absval{\beta - \frac 1 2},
\tag{2.8}\] where \(c_1, c_2 > 0\) are chosen weights, see the GridWalker.jl Git repository. Note that we will use the additive loss (2.8) if not stated otherwise, as it seemed to perform generally better for our needs. Moreover, we deduced from our experimentation that it was better to prioritize minimizing the shift discrepancy as opposed to the parametric distance9.
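A minimal sketch of the two losses follows; the default weights are illustrative (in line with the above, \(c_2\) is chosen larger than \(c_1\) to prioritize the shift discrepancy).

```julia
using LinearAlgebra

# Losses (2.7) and (2.8); i and j are grid-index tuples and β ∈ [0, 1] is the
# normalized shift computed at j.
dist(i, j) = norm(collect(i) .- collect(j))
ℒ_mult(i, j, β; c=0.1) = dist(i, j) * (abs(β - 1 / 2) + c)
ℒ_add(i, j, β; c₁=1.0, c₂=10.0) = c₁ * dist(i, j) + c₂ * abs(β - 1 / 2)
```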
While this approach is ideal in theory, in practice we run into two main issues. Firstly, during the computation, some of the indices are not yet evaluated, thus we do not know their respective values of shifts \(\beta_{\vi i}\). This can be remedied by defining the set of all evaluated indices \(\IndicesOf{\meshGrid}{E}\) and restricting (2.6) to it, yielding \[
\starter_{\IndicesOf{\meshGrid}{E}}(\vi i) = \argmin_{\vi j \in \IndicesOf{\meshGrid}{E}} \loss(\vi i, \vi j, \beta_{\vi j}).
\tag{2.9}\]
Indexers
The second issue is present even in (2.9), and it is the dependence on the entire grid. Indeed, for distant indices the loss function, be it (2.7) or (2.8), will be large and irrelevant for the minimum. Hence, we can define an indexer function \[
\indexer : \indicesOf{\meshGrid} \to 2^{\indicesOf{\meshGrid}}
\tag{2.10}\] specifying a permissible neighborhood for every point of the grid.
Arnold tongues have a distinct shape, see [21], [17], or [15], which we can exploit to restrict the search space of a starter. Indeed, if we assume the Arnold tongue with the anti-phase synchrony is present approximately in the middle of our grid \(\meshGrid\), then elements closer to the center are more likely to live inside the tongue (and therefore be a good initial guess for anti-phase synchronized trajectories for nearby parameter values). Thus, choosing a starter only among evaluated indices on the diagonal from the current index to the center of \(\indicesOf{\meshGrid}\) up to some distance away, as can be seen in the following illustration, should not worsen our initial guess as compared to the full warm starter. \[
\begin{array}{cccccc}
& & & & & \text{center of } \indicesOf{\meshGrid} \\
& & & & \nearrow & \\
\circ_3 & \circ_3 & \circ_3 & \circ_3 & & \\
\circ_2 & \circ_2 & \circ_2 & \circ_3 & & \\
\circ_1 & \circ_1 & \circ_2 & \circ_3 & & \\
\bullet_{\vi i} & \circ_1 & \circ_2 & \circ_3 & &
\end{array}
\] We refer to the described method as the diagonal indexer of length \(l_{\indexer} \in \N\) (example shown with \(l_{\indexer} = 3\)). The combination of a warm starter with indexer, which we shall call an indexed starter, yields the minimization problem \[
\starter_{\indexer_E}(\vi i) = \argmin_{\vi j \in \IndicesOf{\meshGrid}{E} \cap \indexer(\vi i)} \loss(\vi i, \vi j, \beta_{\vi j}).
\tag{2.11}\] Depending on the choice of the indexer, an indexed starter might bring a very important computational benefit of constant complexity with respect to grid size. Indeed, by employing a fixed-length diagonal indexer, the indexed starter10 \(\starter_{\indexer_E}(\vi i)\) requires comparison of at most \(l_{\indexer}^2 - 1\) values of \(\loss(\vi i, \vi j, \beta_{\vi j})\), where \(l_{\indexer} \ll \minOf{N_{\Omega}, N_{\Lambda}}\).
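A sketch of the diagonal indexer matching the illustration above is given below; in practice, the returned candidates must still be intersected with the evaluated set \(\IndicesOf{\meshGrid}{E}\) and clipped to the grid bounds.

```julia
# Candidate indices fill the square between the current index i and the grid
# center, ring by ring, as in the illustration (shown there with l = 3).
function diagonal_indexer(i::NTuple{2,Int}, center::NTuple{2,Int}, l::Int)
    s = sign.(center .- i)                 # direction toward the center, per axis
    s = map(x -> x == 0 ? 1 : x, s)        # degenerate axis: pick any direction
    return [(i[1] + s[1] * a, i[2] + s[2] * b) for a in 0:l, b in 0:l if (a, b) != (0, 0)]
end
```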
Iterators
So far, we have discussed how to choose initial conditions from endpoints of already computed trajectories and how to limit the number of candidate indices. The last undetermined process is the actual order of evaluation of indices. For cold starts, the order could be arbitrary, as it did not influence the rest of the algorithm. However, for warm (2.9) and indexed starters (2.11), we have an explicit dependence on already evaluated indices and their corresponding trajectories. In both cases, if no viable indices are found, the starter falls back to a fixed initial condition (i.e., to a cold start).
The “natural” order of iteration along the mesh is deduced from its representation in the Julia code as two separate ranges. Iterating over their Cartesian product, i.e., the grid \(\meshGrid\), then amounts to subsequently evaluating each column of \(\meshGrid\), see Figure 2.5 (a). Notice that such an evaluation order is suboptimal for the diagonal indexer, which will likely default to a cold start for indices on the left side of the grid.
(a) Line iterator
(b) Centered spiral iterator
(c) Shifted spiral iterator
Figure 2.5: Heatmaps indicating evaluation order (from black to white) for different kinds of iterators from GridWalker.jl.
From the perspective of stability of tongue detection, a more robust strategy is to start the evaluation order from the (presumed) center of the tongue and spirally iterate outwards. This way, an indexed starter can likely initialize from a parametrically nearby trajectory in anti-phase synchronization because suitable candidates are evaluated first, see Figure 2.5 (b) and Figure 2.5 (c).
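A minimal sketch of a centered spiral iterator follows; a shifted spiral amounts to choosing a different center, and the actual implementation in GridWalker.jl may differ.

```julia
# Visit the indices of an n₁ × n₂ grid in rings of increasing Chebyshev
# distance from the chosen center (a simple, non-optimized construction).
function spiral_order(n₁::Int, n₂::Int; center=(cld(n₁, 2), cld(n₂, 2)))
    order = NTuple{2,Int}[]
    for r in 0:max(n₁, n₂)
        for i in 1:n₁, j in 1:n₂
            max(abs(i - center[1]), abs(j - center[2])) == r && push!(order, (i, j))
        end
    end
    return order
end
```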
All these concepts can now be combined. In Figure 2.6, one can see the same heatmaps corresponding to the shift between two weakly coupled interneurons as in Figures 2.3 and 2.4. In particular, an indexed starter \(\starter_{\indexer}\) was used with diagonal indexer of length \(l_{\indexer} = 10\) on a 75-by-75 grid \(\meshGrid\). An additive loss (2.8) was used for the starter in conjunction with a spiral iterator from the true center. The rest of the experiment setup follows Figure 2.3.
(a) \(\tau = 0\)
(b) \(\tau = 0.025\)
Figure 2.6: Heatmap of shifts \(\beta\) between two \(\lmbd\)-weakly coupled interneuron models with respect to \(C_2\) with and without delay. The heatmaps are computed by utilizing introduced techniques of starters, indexers, and iterators. The rest of the procedure follows Figure 2.3 and Figure 2.4.
Notice that in Figure 2.6 (a), the boundary of the tongue for higher coupling strengths is sharper than in Figure 2.3. Also, sub-tongues present in all previous results now appear thinner and more distorted.
2.2.4 Periodicity Checkers
In Figures 2.3 to 2.6, we have always computed a trajectory for each parameter combination belonging to a point on the grid \(\meshGrid\) and then attempted to find the corresponding period and shift. This relied on the assumption that the (joint) trajectory is always periodic, and thus all necessary quantities are well-defined and can be computed. In case the assumption does not hold, we will still get an output, but potentially a nonsensical one.
In other words, one should check the periodicity of the trajectory for each parameter combination. If periodicity is not confirmed, we can skip the rest of the algorithm, saving valuable computing time.
There are several ways one might go about determining periodicity. One possibility is to consider a band around a reference value with respect to \(\vi x_{1,i}\), the \(i\)-th coordinate of the state corresponding to the first oscillator — local maxima work especially well for this approach. Then we can select the indices of all data points lying inside this band and study the distribution of the \(i\)-th coordinate of the trajectory of the second oscillator, i.e., \(\vi x_{2,i}\), at these indices. However, this method is very sensitive to the width of the band and the reference value, rendering it almost unusable in practice.
An alternative approach is to detect peaks of \(\vi x_{1,i}\), the \(i\)-th coordinate of the trajectory of the first oscillator. This again prescribes an index set \(I\), which selects the samples \(\vi x^I_{2,i}\). Then, we can check whether \(\vi x^I_{2,i}\) satisfies chosen criteria, e.g., staying within a predefined range. If this check succeeds, we assume the trajectory as a whole is periodic and execute the rest of the procedure, see Figure 2.7.
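A minimal sketch of this peak-based periodicity checker (the band criterion, its half-width \(\varepsilon\), and all names are illustrative):

```julia
using Statistics

# Detect peaks of the i-th component of oscillator 1 and require the matching
# samples of oscillator 2 to stay within a band of half-width ε around their
# median; if they do, we treat the joint trajectory as periodic.
function periodic_enough(x₁ᵢ::AbstractVector, x₂ᵢ::AbstractVector; ε=1e-2)
    d = diff(x₁ᵢ)
    I = [k for k in 2:length(d) if d[k-1] > 0 && d[k] < 0]  # peak indices of x₁ᵢ
    isempty(I) && return false
    vals = x₂ᵢ[I]                                            # oscillator 2 at peak times
    return maximum(abs.(vals .- median(vals))) <= ε
end
```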
Note 2.2: Peak-based periodicity checker
A careful reader might notice the peak-based periodicity checker closely resembles differences period searcher, see Section 2.2.1. Indeed, it should be possible to merge these two concepts into a single algorithm. Nevertheless, we decided against it for code-readability reasons. Moreover, the computational complexity of the peak-based periodicity checker is almost negligible when considering the entire run of the algorithm over the whole grid, even more so when taking into account the time saved on shift searching for skipped indices \(\vi i \in \indicesOf{\meshGrid}\).
(a) \(\tau = 0\)
(b) \(\tau = 0.025\)
Figure 2.7: Heatmap of shifts \(\beta\) between two \(\lmbd\)-weakly coupled interneuron models with respect to \(C_2\) with and without delay, such that for each parameter combination a periodicity is tested before continuing with the computation. The rest of the computation follows Figure 2.6.
Sub-tongues
Throughout this chapter, we have discussed and implemented various measures to ensure optimal estimation of the true shift between the oscillators. However, none of our modifications have removed the sub-tongues, which can be seen in every shift heatmap. By examining the trajectories at the corresponding points, one can verify both the periodicity and the computed shift. The emergence of the sub-tongues is likely due to our choice of methodology. In particular, they are probably caused by an insufficiently long transient (i.e., the convergence to the attractor is very slow and the oscillators remain near the initial anti-phase), by the discretization of the parameter space to the grid, or possibly by the choice of norms/distances for period and shift searching. Any of these effects could produce the observed fractal-like structure.
Be that as it may, this is a weakness of the grid simulation, but in a way also its strength. Indeed, from what we have computed, we cannot guarantee these sub-tongue regions of anti-phase synchronization actually exist and are not just artifacts of our calculation. Nevertheless, the grid simulation along with a suitable choice of iterator might tell us something about the behavior of the studied system in nature, because it reflects a continuous change of parameters, which might not always allow enough time to converge to the real attractor. Note that the spiral iterator is an especially good choice, as it radially propagates the parametric change, avoiding sudden jumps and discontinuities.
2.2.5 Multi-threading and Grid Computing
Figures 2.3 to 2.7 presented so far have all been computed on a ThinkPad T490s with an Intel Core i7-8565U CPU and 16 GiB of RAM. We have utilized at most 6 threads, and the computations took at most approximately 3 minutes. However, if we were to increase the grid density, the computation time would increase quadratically. Indeed, the calculation of a 1200-by-1200 grid would take at least approximately 768 minutes (assuming 16 GiB of RAM would suffice, so that swapping, which could cause a significant slow-down, would not be needed).
Note that multi-threading is not trivial to implement for our algorithm. A wrong execution order might entirely defeat the purpose of the chosen iterator. Therefore, we must force each thread to evaluate the correct indices, e.g., the \(l\)-th thread might be restricted to \(\Seqnc {\vi i_{l(k+1) - 1}} {k = 0} {N_l}\) (for appropriate \(N_l < N_{\meshGrid}\)), as sketched below.
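A sketch of such an order-preserving work split is shown below; evaluate stands in for the per-index period/shift computation, and the sketch deliberately ignores the data dependence introduced by warm starters, which a real implementation must handle with synchronization between threads.

```julia
using Base.Threads

# Thread l evaluates every L-th index of the iterator order (l, l + L, l + 2L, …),
# so early (central) indices are still among the first to be processed.
function threaded_walk!(results::AbstractMatrix, order, evaluate)
    L = nthreads()
    @threads for l in 1:L
        for k in l:L:length(order)
            idx = order[k]
            results[idx...] = evaluate(idx)   # distinct entries: safe to write in parallel
        end
    end
    return results
end
```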
In general, scientific computing is often too demanding for a personal computer. One possible solution is to offload the computation to grid infrastructure, namely MetaCentrum.
MetaCentrum is the leading national grid computing infrastructure in the Czech Republic, providing researchers, scientists, and students with advanced computing resources to tackle complex computational challenges.
For illustration purposes, we ran the script generating Figure 2.7 for a 1200-by-1200 grid instead of 75-by-75, see Figure 2.8. As we have said, on a regular consumer-grade laptop, such computation would take at the very least 750 minutes simply extrapolating from the 75-by-75 grid computation. On MetaCentrum, we reserved 20 CPU cores, 36 GiB of RAM, and 150 GiB scratch storage with a maximal runtime of 16 hours. Using this setup, the computation took 82 minutes for the experiment without delays and 443 minutes with delay.
(a) \(\tau = 0\)
(b) \(\tau = 0.025\)
Figure 2.8: Heatmap of shifts \(\beta\) between two \(\lmbd\)-weakly coupled interneuron models with respect to \(C_2\) with and without delay computed on MetaCentrum grid. Save for grid density, the setup of the experiment mimics Figure 2.7.
Caution 2.1: Stochastic simulations
So far, we have only used deterministic simulations for every computation of the shift heatmaps. In theory, one could use the Euler-Maruyama method, see Algorithm 1.4, to generate a stochastic realization of the heatmap. Then, using a Monte Carlo approach, we could repeat the stochastic grid generation to obtain the distribution of the shift for each point of the discretized grid. However, such a modification would drastically increase the computational time, not only due to the repeated computations but also because of the less “efficient” Euler-Maruyama solver (as compared to Runge-Kutta methods or similar).
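For completeness, a minimal sketch of the Euler-Maruyama scheme for an SDE \(\dd \vi X = \vi f(\vi X) \, \dd t + \vi g(\vi X) \, \dd W\) is given below; this is the generic textbook form, not a verbatim copy of Algorithm 1.4.

```julia
using Random

# One stochastic realization: x_{k+1} = x_k + f(x_k) Δt + g(x_k) ΔW with
# ΔW ~ N(0, Δt) componentwise; f and g are placeholders.
function euler_maruyama(f, g, x₀, Δt, N; rng=Random.default_rng())
    x = copy(x₀)
    traj = [copy(x)]
    for _ in 1:N
        ΔW = sqrt(Δt) .* randn(rng, length(x))   # Wiener increments
        x .= x .+ f(x) .* Δt .+ g(x) .* ΔW
        push!(traj, copy(x))
    end
    return traj
end
```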
Jacobs, J., Staba, R., Asano, E., Otsubo, H., Wu, J. Y., Zijlmans, M., Mohamed, I., Kahane, P., Dubeau, F., Navarro, V. and Gotman, J. (2012). High-frequency oscillations (HFOs) in clinical epilepsy. Progress in Neurobiology, 98, 302–15.
Ševčík, J. and Přibylová, L. (2025). Cycle multistability and synchronization mechanisms in coupled interneurons: In-phase and anti-phase dynamics under current stimuli. Accepted for publication in Applied Mathematics and Computation.
Neural mass models approximate the behavior of large clusters of neurons based on statistically reasoned simplifications.↩︎
This choice is without loss of generality in case \(\vi K(\cdot, \cdot)\) is symmetric.↩︎
Note that this method is effectively unable to compute unstable cycle manifolds due to the methodology itself.↩︎
The order of the sign change determines whether it is a local minimum or maximum.↩︎
We are interested in the minimal period, thus we add \(\min\) to select the least of all permissible values in (2.2).↩︎
Without unimodality, these methods converge to a local minimum near the starting point.↩︎
By rational synchrony we mean phase-locked synchronization in the form \(p:q\), with \(p,q \in \N\) coprime, i.e., one period consists of \(p\) revolutions around the first axis and \(q\) revolutions around the second. For two weakly coupled oscillators, this translates to a period of the full system being completed after \(p\) periods of the first oscillator and \(q\) of the second one.↩︎
For reference on the terminology, see the cold start Wikipedia entry, https://en.wikipedia.org/wiki/Cold_start_(recommender_systems).↩︎
As will be evident later, by using an indexer we limit possible discrepancy in indices (i.e., the parametric distance). This effectively supercharges the effect of \(c_1\) in (2.8), as it is technically included in the construction of the indexer (thus forcing us to prioritize shift discrepancy).↩︎
The naive warm starter has a time complexity \(O(N_{\Omega} \cdot N_{\Lambda})\) when considering the computation of shift to have constant complexity for all indices \(\vi i\).↩︎
---engine: julia---:::{.content-visible when-format="pdf"}\HeaderNumbered::::::{.hidden}{{< include mathematics.qmd >}}```{julia}#| echo: false#| output: falseusingPkgPkg.activate(".")```::::::{.content-hidden unless-format="html"}```{julia}#| echo: false#| output: falseusingWGLMakieWGLMakie.activate!()```::::::{.content-hidden when-format="html"}```{julia}#| echo: false#| output: falseusingCairoMakieCairoMakie.activate!(type="svg")```:::# Grid Simulation Approach {#sec-simulation-approach}## Problem Formulation {#sec-problem-formulation}Neuroscience is among the richest applications of mathematics in the current day and age. In neuronal dynamics, synchronization phenomena occurring in networks of individual neurons are of particular importance. These complex patterns have been tied to the very fundamental capabilities of the brain, be it information processing, memory encoding, or the generation of rhythmic brain activity, see @Izhikevich2006, @Song2018. Although it is likely safe to assume a certain level of robustness of brain functions regarding anomalies in the synchronization pattern, significant abnormalities in the rhythmic brain activity can be attributed to neurological disorders. The sheer variety of neurological disorders invalidates any sort of comprehensive study in a mere master's thesis, thus we shall limit ourselves to focal epilepsy. Let us note that it is still a vast topic in both neurology and applied mathematics and we will only manage to scratch the surface.From the perspective of neuronal dynamics, these abnormalities (in the case of focal epilepsy) manifest themselves as anomalous synchronization within the neuronal network and can lead to spontaneous seizures, see @Jiruska2013. This type of synchronization is often closely linked to the presence of fast ripples (oscillations in the frequency range 250--600 Hz), very-fast ripples (600 - 1000 Hz), ultra-fast ripples (1000--2000 Hz), and ultra-fast oscillations (UFOs, above 2000 Hz) within depth electroencephalographic (EEG) recordings from patients with focal epilepsy. This possible connection between focal points of drug-resistant epilepsy and high-frequency EEG reading in their vicinity has been the cause of significant research interest due to the potential use as biomarkers for epileptogenic zones, see @Jacobs2008, @Jacobs2012, @Worrell2011, @Staba2011, @Brazdil2017, @Cimbalnik2018, @Cimbalnik2020, and @Brazdil2023. Let us note the exact mechanism governing these high-frequency oscillations in EEG signals remains unknown, when physiological limitations of action potential firing rates are taken into account, see @Jiruska2017.Conducted research sheds light on the role of anti-phase synchronization as a possible mechanism for the emergence of HFOs and VFHOs in epileptic EEG signals, see @Pribylova2024, @Sevcik2025, and @Zathurecky2025. The applied part of this thesis is mainly based on @Zathurecky2025 (the author of this thesis also contributed to the publication), although our focus will be the comparison between bifurcation- and simulation-based approaches to the same problem.Let us thus consider two weakly coupled oscillators (e.g., neurons, clusters thereof, neural masses, etc.). For illustration, these will be two interneuron models connected by a (possibly delayed) gap-junctional coupling, see @sec-neuronal-dynamics. By phase synchronization, we understand two (or more) cyclic signals featuring a consistent relationship between their phases. Note that the amplitudes or even frequencies may differ. 
Understanding of this phenomenon, often called *phase locking* in the context of bifurcation theory, can be aided by examining corresponding *Arnold tongues*, which are regions where a phase synchronization occurs, see @Wiggins2003, @Kuznetsov2023. We shall be primarily interested in a system consisting of 2 weakly coupled oscillators with a possible delay in the coupling. In such systems, regions of phase-synchrony between the two oscillators may arise for certain values of the coupling strength and some parameter, which influences the oscillation frequency of one of the oscillators. Then, by the continuous dependence of the solution on parameters and initial values for both the ODEs, see @Chicone2006 (Theorem 1.3), and DDEs, see @Hale1993 (Chapter 2.2), we can explore the phase-shift in the coupled system via small changes in both parameters.In general, the system of interest takes the following DDE form,$$\begin{aligned} \dvi x_1(t) &= \vi f(\vi x_1(t); \omega_1) + \lmbd \vi K(\vi x_1(t), \vi x_2(t - \tau), \sigma), \\ \dvi x_2(t) &= \vi f(\vi x_2(t); \omega_2) + \lmbd \vi K(\vi x_1(t - \tau), \vi x_2(t), \sigma),\end{aligned}$$ {#eq-general-delayed-coupled-oscillators}where $\vi x_1(t), \vi x_2(t) \in \R^n$ are the state variables, $\omega_1, \omega_2 \in \R$ parameters influencing oscillation frequencies of the respective models, $\sigma$ is an arbitrary fixed phase parameter, and $\lmbd \in \R^+_0$ the coupling strength. Lastly, $\vi K$ represents the coupling function. Note that $\vi x_i$ does not have to represent a single neuron (or model thereof), but even synchronized clusters of neurons or neural masses^[Neural mass models approximate the behavior of large clusters of neurons based on statistically reasoned simplifications.].## Grid-Based Phase-Shift ComputationThroughout this section, we shall iteratively develop a framework for computation of the *phase-shift* on the anti-phase tongue for 2 weakly coupled oscillators, most often neurons, their synchronized clusters, or neural masses. Note that the algorithms and procedures described in this section had been formed and tested gradually throughout the research process and are the original work of the author. This should also explain the lack of references.Thus consider the system ([-@eq-general-delayed-coupled-oscillators]) and choose $\omega_1$ or $\omega_2$ as the free parameter^[This choice is without loss of generality in case $\vi K(\cdot, \cdot)$ is symmetric.]. For illustration purposes, we shall use 2 interneuron models, see ([-@eq-coupled-interneurons]), with delayed gap-junctional coupling, see @sec-neuronal-couplings, e.g., $\tau = 0.01$, and treat $\omega_2$ (through $C_2$ for ([-@eq-coupled-interneurons])) as the free parameter. Moreover, assume bounds $\omega_- \leq \omega_2 \leq \omega_+$ and corresponding discretization$$\underbrace{\overbrace{\omega_2^0}^{\omega_-} < \omega_2^1 < \dots < \overbrace{\omega_2^{N_{\Omega}}}^{\omega_+}}_{\Omega},$$with analogous bounds and discretization chosen for $\lambda$ as well, producing set $\Lambda$ of permissible values for $\lmbd$. Both are most often chosen as equidistant partitions of intervals stemming from respective bounds. 
We shall denote $\meshGrid = \Omega \times \Lambda$ the mesh of all possible combinations of $\omega_2^i$ and $\lmbd^i$ and let $$\indicesOf{\meshGrid} = \set{\vi i = (i_1, i_2) \in \N_0^2 \divider 0 \leq i_1 \leq N_{\Omega}, 0 \leq i_2 \leq N_{\Lambda}}$$be the set of all possible indices in $\meshGrid$.The most straightforward way to determine for which pairs $(\omega_2, \lmbd) \in \meshGrid$ the coupled system ([-@eq-general-delayed-coupled-oscillators]) features a *stable*^[Note that this method is effectively unable to compute unstable cycle manifolds due to the methodology itself.] anti-phase cycle is to solve the ([-@eq-general-delayed-coupled-oscillators]) forward in time from a fixed initial condition $\vi \phi \in \contStateSpace$ on a fixed time interval $[0, t_{\mathrm{end}}]$. This is a reasonable methodology if we assume the system converges to the synchronized cycle and that we chose a long-enough transient, included in the interval $[0, t_{\mathrm{end}}]$, such that we end up in an acceptably small neighborhood around the limit cycle.In case there is no delay in the coupling, i.e., $\tau = 0$, then we can integrate the system forward in time, see @sec-integrating-odes. Similarly, for delayed coupling, one can employ the method of steps, which allows for an analogous forward-in-time integration, see @sec-integrating-ddes. Either way, this results in 2 trajectories, both corresponding to evaluations at times $\Seqnc {t^i} {i = 0} N$, namely $\Seqnc {\vi x_1^i} {i = 0} N$ and $\Seqnc {\vi x_2^i} {i = 0} N$, each for one oscillator. Moreover, denote by$$\Seqnc {\vi \xi^i} {i = 1} N = \Seqnc {\mtr{\vi x_1^i \\ \vi x_2^i}} {i = 0} N$$the joint trajectory. For simplicity, we shall assume that the time interval is divided equidistantly, i.e., there is a constant time-step $\diff t$.Now, we face 2 main challenges:1. How to determine the period $T$ of the coupled system ([-@eq-general-delayed-coupled-oscillators])?2. Knowing the period $T$, how to calculate the phase-shift $\beta$ between the oscillators?Let us note the following numerical experiments and calculations were accomplished in Julia using a custom-made package [`GridWalker.jl`](https://gitlab.com/sceptri-university/gamu/gridwalker.jl).### Period Searching {#sec-period-searching}To calculate the joint period $T$ of ([-@eq-general-delayed-coupled-oscillators]) from $\Seqnc {\vi \xi^i} {i = 0} N$, we propose two distinct methods. The first method relies on the fact that for capacitance-based models of neurons, there are certain notable components, e.g., the membrane potential, which should predominantly determine the overall behavior of the neuronal model. Moreover, we assume the periodic signal of this state variable behaves rather nicely, attaining a single (local) minimum and maximum on the repeating section of the minimal period. However, in practice, we distinguish two main types of neuronal oscillations (behaviors). Neuronal spiking, for which our assumptions hold, is a simple type of neuronal oscillation. It is complemented by a more complex bursting behavior, as is present in pyramidal cells, see @Pribylova2025. In such a case, an envelope must be constructed, which then allows us to use our methodology on the envelope itself. Nevertheless, by choosing the $j$-th element of the state vector corresponding to the coupled model, we get a 1-dimensional time series $\Seqnc {\xi^i_j} {i = 0} N$. 
One can then compute first differences $\Seqnc {\diff \xi^i_j} {i = 1} N$, where $\diff \xi^i_j \letDef \xi^i_j - \xi^{i - 1}_j$.Clearly, (local) extrema of the time series $\Seqnc {\xi^i_j} {i = 0} N$ occur when the differences $\Seqnc {\diff \xi^i_j} {i = 1} N$ change sign^[The order determines whether its a local minimum or maximum.]. In other words, we attempt to detect peaks (which should reflect the true period) as indices satisfying$$i \text{ is a peak } \iff \diff \xi^i_j > 0 \; \land \; \diff \xi^{i+1}_j < 0.$$Let us denote $I^{\uparrow}_j$ the set of all peaks originating from the choice of notable index $j$, then the period $T$ can be approximated as the $\diff t$ multiple of the *mean difference* $\avg{\diff I^{\uparrow}_j}$ of the peak indices, i.e.$$T \approx \estim T_{\diff} = \diff t \cdot \avg{\diff I^{\uparrow}_j}.$$We shall call $\estim T_{\diff}$ the *differences period estimate*. Note that while this method is rather inexpensive from the computational point of view, it depends on the appropriate choice of $j$ and the necessary assumption that the period of $\xi_j$ accurately reflects the period of the joint oscillation. For an example, see @fig-diff-period.::: {.callout-tip #tip-monotony-extension}##### On the length of the monotony of $\xi^i_j$Currently, we require the $\xi^i_j$ to be strictly increasing for only one step left of a peak and decreasing also for only one step right of the peak. This can be extended to require $\xi^i_j$ to be increasing for $k_+$ steps on the left and decreasing for $k_-$ steps on the right of a peak. Such modification allows us to detect periods of much more complicated signals.:::```{julia}#| label: fig-diff-period#| fig-cap: Approximation of the period of a test function $y(t)= \sin(t) + \frac {\cos\brackets{2t + \frac {\pi} 3}} 3$ (with true period $2\pi$) by *differences period* estimation method.#| width: 70%usingLaTeXStrings, Statisticsfig =Figure(size=(450, 250))ax1 =Axis(fig[1, 1], ylabel=L"y(t)", xtickformat=v ->latexstring.(v), ytickformat=v ->latexstring.(v))ax2 =Axis(fig[2, 1], ylabel=L"\Delta y(t)", xtickformat=v ->latexstring.(v), ytickformat=v ->latexstring.(v))linkxaxes!(ax1, ax2)Δt =0.5t =0:Δt:13π;n =length(t);y = @. sin(t) +1/3*cos(2* t +π/3)ts = t[2:end]dy =diff(y)dy_up = dy .>0dy_down = dy .<0# Find where the monotony changes from up to downy_monotony = dy_up[2:end] .&& dy_down[1:end-1]# Compute mean distance between upsIⱼ =findall(identity, y_monotony)for i in Iⱼvlines!(ax1, ts[i]; linewidth=2, color=ones(n), colormap=:Zissou1)vlines!(ax2, ts[i]; linewidth=2, color=ones(n), colormap=:Zissou1)endscatterlines!(ax1, t, y, colormap=:Zissou1)scatterlines!(ax2, ts, dy, colormap=:Zissou1, color=[el >0 ? 1:2 for el in dy])ΔĪⱼ =mean(diff(Iⱼ))T̂ = ΔĪⱼ * Δtax2.xlabel =LaTeXString("\$\\hat{T}_{\\Delta} \\approx $T̂\$")fig```The second approach one can use is less heuristic but potentially more computationally intensive. 
Naturally, the joint period $T$ should minimize the discrepancy between $t$ and $t + T$ states, i.e.,^[We are interested in the *minimal period*, thus we add $\min$ to select the least of all permissible values in ([-@eq-period-minimization-problem]).]$$T = \minOf{\argmin_{T > 0} \int_a^b \norm{\vi \xi(t) - \vi \xi(t + T)} \dd t}, \quad (a,b) \subset \R.$$ {#eq-period-minimization-problem}As it stands, ([-@eq-period-minimization-problem]) is infeasible to solve computationally, but one can rather easily discretize it using $\Seqnc {\vi \xi^i} {i = 0} N$ to obtain$$\estim T_{\min, P} = \argmin_{T \in P} \overbrace{\sum_{i = 0}^{i_{\mathrm{end} - T}} \norm{\vi \xi^i - \vi \xi^{i + \floor{T/\diff t}}}}^{f_{\vi \xi}(T)},$$ {#eq-discretized-period-minimization}where $P = (p_-, p_+)$, with $p_+ > p_- > 0$, is a chosen interval and $i_{\mathrm{end} - T}$ denotes the maximum index $i$ such that $t^i \leq t_{\mathrm{end}} - T$. The discretized problem ([-@eq-discretized-period-minimization]) can now be solved using discussed optimization methods for one-dimensional optimization, see @sec-1d-optimization, under the assumption of *unimodality*!^[Without unimodality, these methods converge to a local minimum near the starting point.]Note that $\estim T_{\min, P}$ is *not* guaranteed to approximate the minimal period. In practice, we often repeat solving ([-@eq-discretized-period-minimization]) for multiple values of $P_i$ and choose the estimate $\estim T_{\min, P_i}$ with the least error. In particular, we found that using $P_i = \brackets{p_-, \frac {p_+}{2^{i-1}}}$ gives satisfactory results. Moreover, for a short interval $P$, the unimodality of the loss function corresponding to ([-@eq-discretized-period-minimization]) is likely to be satisfied. In addition, @fig-optim-period illustrates the algorithm.```{julia}#| label: fig-optim-period#| fig-cap: Approximation of the period of a test function $y(t)= \sin(t) + \frac {\cos\brackets{2t + \frac {\pi} 3}} 3$ (with true period $2\pi$) by *optimal period* estimation method.#| width: 70%usingOptim, LinearAlgebrafig =Figure(size=(450, 250))ax1 =Axis(fig[1, 1], ylabel=L"y(t)", xtickformat=v ->latexstring.(v), ytickformat=v ->latexstring.(v))ax2 =Axis(fig[2, 1], ylabel=L"f_{\mathbf{\xi}}(T)", xtickformat=v ->latexstring.(v), ytickformat=v ->latexstring.(round.(Int, v)))P = (1, 10)xlims!(ax2, P)Δt =0.5t =0:Δt:13π;n =length(t);y = @. 
sin(t) +1/3*cos(2* t +π/3)i_end(T) =findlast(tᵢ <= (t[end] - T) for tᵢ in t)T_ex =3.2f_ξ(T) =sum(norm(y[i] - y[i+floor(Int, T / Δt)]) for i in1:i_end(T))p =first(P):0.01:last(P)result = Optim.optimize(f_ξ, P..., Optim.GoldenSection())T̂ =round(result.minimizer, digits=2)ax2.xlabel =LaTeXString("\$\\hat{T}_{\\min, P} \\approx $T̂\$")scatterlines!(ax1, t[1:i_end(T_ex)], [y[i+floor(Int, T_ex / Δt)] for i in1:i_end(T_ex)], color=(:gray, 0.4), markersize=5)scatterlines!(ax1, t[1:i_end(T̂)], [y[i+floor(Int, T̂ / Δt)] for i in1:i_end(T̂)], color=ones(i_end(T̂)), colormap=:Zissou1, markersize=5)scatterlines!(ax1, t, y, colormap=:Zissou1, markersize=5)vlines!(ax2, T_ex, color=(:gray, 0.4))vlines!(ax2, T̂, color=[1], colormap=:Zissou1)lines!(ax2, p, f_ξ.(p); colormap=:Zissou1)scatter!(ax2, [T_ex], [f_ξ(T_ex)], color=:gray)scatter!(ax2, [T̂], [f_ξ(T̂)], color=[1], colormap=:Zissou1)fig```::: {.callout-tip #tip-optimal-period-modifications}##### Modifications to the optimal period methodIn ([-@eq-discretized-period-minimization]), we have used the full state vector for discrepancy calculation, but similarly as with the computation of $\estim T_{\diff}$, one can choose only a sub-vector as well. Another common modification is to calculate the dissimilarity in ([-@eq-discretized-period-minimization]) after a certain transient has passed (such that we presume we have converged to the cycle after the transient), i.e., on the interval with indices from $i_{\mathrm{start}}$ to $i_{\mathrm{end} - T}$.:::### Shift Searching {#sec-shift-searching}After one has successfully estimated the period in the joint model, what remains is to approximate the shift between the two oscillators of ([-@eq-general-delayed-coupled-oscillators]). This can be achieved by solving a very similar problem ([-@eq-discretized-period-minimization]), namely$$\estim \beta_d = \argmin_{\beta \in Q} \sum_{i = i_{\mathrm{start}}}^{i_{\mathrm{end} - \beta}} d\brackets{\vi x_1^i, \vi x_2^{i + \floor{\beta/\diff t}}},$$ {#eq-discretized-shift-minimization}where $Q = (q_-, q_+)$ is a chosen interval with $0 \leq q_- < q_+ < \estim T$ and $d$ is a metric. Commonly, metrics induced by $\ltwo$-norm $\norm{\cdot}_2$ or maximum-norm $\norm{\cdot}_{\infty}$ are used. Moreover, $i_{\mathrm{start}}$ can again be used to offset the computation of shift $\beta$ after a transient has passed.Note that we shall primarily focus on calculating a shift $\beta$ between periodic trajectories of a notable component $j$ in each of the oscillators, i.e.,$$\estim \beta_{d,j} = \argmin_{\beta \in Q} \sum_{i = i_{\mathrm{start}}}^{i_{\mathrm{end} - \beta}} d\brackets{x_{1,j}^i, x_{2,j}^{i + \floor{\beta/\diff t}}}.$$ {#eq-1d-discretized-shift-minimization}In total, ([-@eq-discretized-shift-minimization]) and ([-@eq-1d-discretized-shift-minimization]) both lead to a one-dimensional optimization problem on a given interval and as such can be solved using methods proposed in @sec-1d-optimization. In particular, in the case of shift searching, the unimodality of the corresponding loss function seems to be satisfied more generally.For a comparison of computed shifts in regards to period estimation based on differences and optimization, see Figures [-@fig-cold-diff-shift] and [-@fig-cold-diff-shift-delay]. 
There, it is apparent the central Arnold tongue has a rather sharp boundary at some places, which is desired from the theoretical point of view, see @Kuznetsov2023 and @Zathurecky2025, as it is known that Arnold tongues are bordered by limit point of cycle curves, see @nte-arnold-tongues. On the other hand, for both low and high values of $\lambda$, our approximation of the tongue is either smoothed out (likely signaling that our choice of the initial condition was suboptimal) or scattered.::: {.callout-note #nte-arnold-tongues}##### Arnold tongueIn general, Arnold tongues are regions (in parameter space) of rational^[By *rational synchrony* we mean phase-locked synchronization in the form $p:q$, $p,q \in \N$ indivisible, i.e., one period consists of $p$ revolutions around the first axis and $q$ revolutions around the second. For two weakly coupled oscillators, this translates to a period of the full system being completed after $p$ periods of the first oscillators and $q$ of the second one.] synchrony near a Neimark-Sacker bifurcation. For a continuous-time dynamical system, this, in essence, means that a given cycle switches stability and a torus with the opposite stability emerges around it (i.e., a stable cycle loses stability when it undergoes the supercritical Neimark-Sacker bifurcation while a stable torus appears around the now-unstable cycle). The dynamics on the torus then provide the two "directions of oscillation", for which we can study its synchrony. Furthermore, two weakly coupled oscillators themselves are governed by dynamics on a torus, such that the influence of coupling strength $\lmbd$ mimics the transition generically originating from the Neimark-Sacker bifurcation.:::```{julia}#| echo: false#| output: falseusingColorSchemes, JLD2CyclicZissou =ColorScheme([ColorSchemes.Zissou1.colors..., reverse(ColorSchemes.Zissou1.colors)...]);```::: {#fig-cold-diff-shift layout-ncol="2"}```{julia}#| echo: false#| label: fig-cold-diff-shift-a#| fig-cap: "*Differences* period estimation"jldopen("../data/no_delay_cold_diff_no_checker.jld2", "r") do data fig =Figure(size=(600, 400)) ax = CairoMakie.Axis(fig[1, 1]; xlabel=L"C_2", ylabel=L"\lambda", xtickformat=v ->latexstring.(round.(v, digits=2)), ytickformat=v ->latexstring.(round.(v, digits=2))) β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"]) hmap =heatmap!(ax,range(data["C₂"]...),range(data["λ"]...),transpose(data["shifts"]), colormap=CyclicZissou, colorrange=(0, 1))Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks) figend``````{julia}#| echo: false#| label: fig-cold-diff-shift-b#| fig-cap: "*Optimal* period estimation"jldopen("../data/no_delay_cold_optimal_no_checker_among_halves.jld2", "r") do data fig =Figure(size=(600, 400)) ax = CairoMakie.Axis(fig[1, 1]; xlabel=L"C_2", ylabel=L"\lambda", xtickformat=v ->latexstring.(round.(v, digits=2)), ytickformat=v ->latexstring.(round.(v, digits=2))) β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"]) hmap =heatmap!(ax,range(data["C₂"]...),range(data["λ"]...),transpose(data["shifts"]), colormap=CyclicZissou, colorrange=(0, 1))Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks) figend```Heatmap of shifts $\beta$ between two $\lmbd$-weakly coupled interneuron models with respect to $C_2$ without delay. Grid is determined as 75-point discretizations of intervals of interest for both $C_2$ and $\lambda$. At every point the same initial condition $\vi \phi(\cdot) \equiv \vi u_0$ was used. 
The left figure presents the shift heatmap based on the *differences* period, whereas the right figure shows the *optimal* period-based shift. Trajectories are computed for 1500 ms; periods are computed on the last 20% and shifts on the last 10% of their lengths. Note that the optimal period is calculated by halving the upper constraint up to 4 times. The shift is estimated from the first state variable, i.e., $V$, in both interneurons.
:::

::: {#fig-cold-diff-shift-delay layout-ncol="2"}
```{julia}
#| echo: false
#| label: fig-cold-diff-shift-delay-a
#| fig-cap: "*Differences* period estimation"
jldopen("../data/delay_cold_diff_no_checker.jld2", "r") do data
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

```{julia}
#| echo: false
#| label: fig-cold-diff-shift-delay-b
#| fig-cap: "*Optimal* period estimation"
jldopen("../data/delay_cold_optimal_no_checker_among_halves.jld2", "r") do data
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

Heatmap of shifts $\beta$ between two $\lmbd$-weakly coupled interneuron models with respect to $C_2$, with delay $\tau = 0.05$. Note that except for the time integration (performed by the continuous ODE method, see @sec-integrating-ddes), the same methods were used as in @fig-cold-diff-shift.
:::

### Starters, Iterators, and Indexers {#sec-starters-iterators-indexers}

#### Starters

So far, we have utilized a rather naive approach of always using the same predetermined starting condition --- we shall refer to this as a *cold* start^[For reference on the terminology, see the [cold start Wikipedia entry](https://en.wikipedia.org/wiki/Cold_start_(recommender_systems)).]. Nevertheless, we have no guarantee that such a choice was optimal for any grid point, much less for the entire grid. A natural solution to this problem is to reuse trajectories we have already computed for some other combination of parameters. In general, it is beneficial to choose as the initial condition the endpoint of a trajectory corresponding to parameter values *close* to our current ones. Moreover, we want to consider only those nearby parameter values whose trajectories lead to anti-phase synchronization.
In total, the optimal reference index $\starter(\vi i) \in \indicesOf{\meshGrid}$, from which to read the new initial condition for the shift computation at $\vi i$ (using $\vi i \in \indicesOf{\meshGrid}$ to denote the combined index along both axes of the grid $\meshGrid$), is given by the following minimization problem,
$$
\starter_{\indicesOf{\meshGrid}}(\vi i) = \argmin_{\vi j \in \indicesOf{\meshGrid}} \loss(\vi i, \vi j, \beta_{\vi j}),
$$ {#eq-mesh-warm-start-minimization}
where we use either the *multiplicative* loss,
$$
\loss_{\mathrm{mult}}(\vi i, \vi j, \beta) = \norm{\vi i - \vi j} \cdot \brackets{\absval{\beta - \frac 1 2} + c},
$$ {#eq-starter-multiplicative-loss}
where $c > 0$ is an offset added to the discrepancy of the shift from anti-phase, or the *additive* loss,
$$
\loss_{\mathrm{add}}(\vi i, \vi j, \beta) = c_1 \norm{\vi i - \vi j} + c_2 \absval{\beta - \frac 1 2},
$$ {#eq-starter-additive-loss}
where $c_1, c_2 > 0$ are chosen weights, see the GridWalker.jl [Git repository](https://gitlab.com/sceptri-university/gamu/gridwalker.jl/-/blob/main/src/WalkerParameters.jl?ref_type=heads#L28-29). We will use the additive loss ([-@eq-starter-additive-loss]) unless stated otherwise, as it seemed to perform generally better for our needs. Moreover, we deduced from our experimentation that it was better to prioritize minimizing the shift discrepancy over the parametric distance^[As will become evident later, by using an *indexer* we limit the possible discrepancy in indices (i.e., the parametric distance). This effectively supercharges the effect of $c_1$ in ([-@eq-starter-additive-loss]), as it is technically included in the construction of the indexer (thus forcing us to prioritize the shift discrepancy).].

While this approach is ideal in theory, in practice we run into two main issues. Firstly, during the computation, some of the indices have not yet been evaluated, so we do not know their respective shifts $\beta_{\vi j}$. This can be remedied by defining the set of all *evaluated* indices $\IndicesOf{\meshGrid}{E}$ and restricting ([-@eq-mesh-warm-start-minimization]) to it, yielding
$$
\starter_{\IndicesOf{\meshGrid}{E}}(\vi i) = \argmin_{\vi j \in \IndicesOf{\meshGrid}{E}} \loss(\vi i, \vi j, \beta_{\vi j}).
$$ {#eq-warm-start-minimization}
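As an illustration, a minimal sketch of the additive loss ([-@eq-starter-additive-loss]) and the restricted minimization ([-@eq-warm-start-minimization]) might look as follows. The dictionary `evaluated`, mapping an index to its already computed shift, and the default weights `c₁`, `c₂` are illustrative assumptions, not the actual GridWalker.jl interface.

```{julia}
#| eval: false
using LinearAlgebra

# Additive starter loss (eq-starter-additive-loss); i, j are index tuples,
# β the shift (in units of the period) already computed at j
loss_add(i, j, β; c₁=1.0, c₂=10.0) =
    c₁ * norm(collect(i) .- collect(j)) + c₂ * abs(β - 1/2)

# Warm starter (eq-warm-start-minimization): minimize the loss over all
# evaluated indices; return `nothing` to signal a fallback to the cold start
function warm_starter(i, evaluated::Dict)
    isempty(evaluated) && return nothing
    return argmin(j -> loss_add(i, j, evaluated[j]), collect(keys(evaluated)))
end
```

The larger default weight on the shift term reflects the preference noted above for prioritizing the shift discrepancy over the parametric distance.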
#### Indexers

The second issue is present even in ([-@eq-warm-start-minimization]): the dependence on the entire grid. Indeed, for distant indices, the loss function, be it ([-@eq-starter-multiplicative-loss]) or ([-@eq-starter-additive-loss]), will be large and irrelevant for the minimum. Hence, we can define an *indexer* function
$$
\indexer : \indicesOf{\meshGrid} \to 2^{\indicesOf{\meshGrid}}
$$ {#eq-indexer-def}
specifying a permissible neighborhood for every point of the grid. Arnold tongues have a distinct shape, see @Sevcik2021, @Kuznetsov2023, or @Zathurecky2025, which we can exploit to restrict the search space of a starter. Indeed, if we assume that the Arnold tongue with the anti-phase synchrony is present approximately in the middle of our grid $\meshGrid$, then elements closer to the center are more likely to lie inside the tongue (and therefore to provide a good initial guess for anti-phase synchronized trajectories at nearby parameter values). Thus, choosing a starter only among the evaluated indices on the diagonal from the current index toward the center of $\indicesOf{\meshGrid}$, up to some distance away, as in the following illustration, should not worsen our initial guess compared to the full *warm starter*.
$$
\begin{array}{cccccc}
 & & & & & \text{center of } \indicesOf{\meshGrid} \\
 & & & & \nearrow & \\
\circ_3 & \circ_3 & \circ_3 & \circ_3 & & \\
\circ_2 & \circ_2 & \circ_2 & \circ_3 & & \\
\circ_1 & \circ_1 & \circ_2 & \circ_3 & & \\
\bullet_{\vi i} & \circ_1 & \circ_2 & \circ_3 & &
\end{array}
$$
We refer to the described method as the *diagonal indexer* of length $l_{\indexer} \in \N$ (the example is shown with $l_{\indexer} = 3$). The combination of a *warm starter* with an *indexer*, which we shall call an *indexed starter*, yields the minimization problem
$$
\starter_{\indexer_E}(\vi i) = \argmin_{\vi j \in \IndicesOf{\meshGrid}{E} \cap \indexer(\vi i)} \loss(\vi i, \vi j, \beta_{\vi j}).
$$ {#eq-warm-start-diagonal-indexer}

Depending on the choice of the indexer, an indexed starter can bring a very important computational benefit: *constant complexity* with respect to the grid size. Indeed, by employing a fixed-length diagonal indexer, the indexed starter^[The naive warm starter has time complexity $O(N_{\Omega} \cdot N_{\Lambda})$ when the computation of the shift is considered to have constant complexity for all indices $\vi i$.] $\starter_{\indexer_E}(\vi i)$ requires comparison of at most $(l_{\indexer} + 1)^2 - 1$ values of $\loss(\vi i, \vi j, \beta_{\vi j})$ (the $(l_{\indexer} + 1) \times (l_{\indexer} + 1)$ block from the illustration minus the current index), where $l_{\indexer} \ll \minOf{N_{\Omega}, N_{\Lambda}}$.
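To illustrate, a sketch of the diagonal indexer might look as follows; it enumerates the L-shaped rings $\circ_k$ from the illustration above, stepping from the current index toward the grid center. The function name and signature are our own for exposition, not the GridWalker.jl interface, and degenerate cases (an index already at the center, rings leaving the grid) are ignored for brevity.

```{julia}
#| eval: false
# Diagonal indexer sketch: collect the L-shaped rings ∘ₖ, k = 1, …, l,
# between the current index `i` and the grid center, yielding at most
# (l + 1)^2 - 1 candidate indices
function diagonal_indexer(i::CartesianIndex{2}, center::CartesianIndex{2}, l::Int)
    s = CartesianIndex(sign.(Tuple(center - i)))  # unit step toward the center
    candidates = CartesianIndex{2}[]
    for k in 1:l, m in 0:k
        # the two arms of the k-th ring
        push!(candidates, i + CartesianIndex(k * s[1], m * s[2]))
        push!(candidates, i + CartesianIndex(m * s[1], k * s[2]))
    end
    return unique(candidates)
end
```

Intersecting the returned set with the evaluated indices $\IndicesOf{\meshGrid}{E}$ and minimizing the loss over it then realizes ([-@eq-warm-start-diagonal-indexer]).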
#### Iterators

So far, we have discussed how to choose initial conditions from the endpoints of already computed trajectories and how to limit the number of candidate indices. The last undetermined part of the process is the actual order of evaluation of the indices. For cold starts, the order could be arbitrary, as it did not influence the rest of the algorithm. However, for the *warm* ([-@eq-warm-start-minimization]) and *indexed* starters ([-@eq-warm-start-diagonal-indexer]), we have an explicit dependence on the already evaluated indices and their corresponding trajectories. In both cases, if no viable indices are found, the starter falls back to a fixed initial condition (i.e., to the cold starter).

The "natural" order of iteration along the mesh is deduced from its representation in the Julia code as two separate ranges. Iterating over their Cartesian product, i.e., the grid $\meshGrid$, then amounts to evaluating each column of $\meshGrid$ in turn, see @fig-iterators-a. Notice that such an evaluation order is suboptimal for the *diagonal indexer*, which will likely default to a cold start for indices on the left side of the grid.

```{julia}
#| echo: false
#| output: false
using GridWalker

l = 10
λ = range(start=0.0, stop=0.035, length=l)
C₂ = range(start=0.95, stop=1.05, length=l)

walker_params = parametrize_walker(;
    min_x_param=1, max_x_param=1, x_param_density=1,
    min_y_param=1, max_y_param=1, y_param_density=1)

order_of_iteration(enumerator) = begin
    I𝕄 = zeros(l, l)
    k = 1
    for (i, _) in enumerator(C₂, λ, walker_params)
        I𝕄[i...] = k
        k += 1
    end
    return I𝕄
end
```

::: {#fig-iterators layout-ncol="3"}
```{julia}
#| echo: false
#| label: fig-iterators-a
#| fig-cap: "Line iterator"
fig = Figure()
ax = Axis(fig[1, 1], xlabel=L"C_2", ylabel=L"\lambda",
    xtickformat=v -> latexstring.(round.(v, digits=2)),
    ytickformat=v -> latexstring.(round.(v, digits=2)))
heatmap!(ax, C₂, λ, order_of_iteration(GridWalker.line_enumerator), colormap=:grays)
fig
```

```{julia}
#| echo: false
#| label: fig-iterators-b
#| fig-cap: "Centered spiral iterator"
fig = Figure()
ax = Axis(fig[1, 1], xlabel=L"C_2", ylabel=L"\lambda",
    xtickformat=v -> latexstring.(round.(v, digits=2)),
    ytickformat=v -> latexstring.(round.(v, digits=2)))
heatmap!(ax, C₂, λ, order_of_iteration(GridWalker.spiral_enumerator), colormap=:grays)
fig
```

```{julia}
#| echo: false
#| label: fig-iterators-c
#| fig-cap: "Shifted spiral iterator"
shifted_spiral = (args...) -> GridWalker.spiral_enumerator(args...;
    center_function=GridWalker.relative_center, fixed_ratio_center=(0.4, 0.8))

fig = Figure()
ax = Axis(fig[1, 1], xlabel=L"C_2", ylabel=L"\lambda",
    xtickformat=v -> latexstring.(round.(v, digits=2)),
    ytickformat=v -> latexstring.(round.(v, digits=2)))
heatmap!(ax, C₂, λ, order_of_iteration(shifted_spiral), colormap=:grays)
fig
```

Heatmaps indicating the evaluation order (from black to white) for different kinds of iterators from `GridWalker.jl`.
:::

From the perspective of the stability of tongue detection, a more robust strategy is to start the evaluation from the (presumed) center of the tongue and iterate spirally outwards. This way, an indexed starter can likely initialize from a parametrically nearby trajectory in anti-phase synchronization, because suitable candidates are evaluated first, see @fig-iterators-b and @fig-iterators-c.

All these concepts can now be combined. In @fig-indexed-shift, one can see the same heatmaps corresponding to the shift between two weakly coupled interneurons as in Figures [-@fig-cold-diff-shift] and [-@fig-cold-diff-shift-delay]. In particular, an indexed starter $\starter_{\indexer}$ was used with a diagonal indexer of length $l_{\indexer} = 10$ on a 75-by-75 grid $\meshGrid$. The additive loss ([-@eq-starter-additive-loss]) was used for the starter, in conjunction with a spiral iterator from the true center.
The rest of the experiment setup follows @fig-cold-diff-shift.

::: {#fig-indexed-shift layout-ncol="2"}
```{julia}
#| echo: false
#| label: fig-indexed-shift-a
#| fig-cap: "$\\tau = 0$"
jldopen("../data/no_delay_spiral_indexed_optimal_no_checker_among_halves.jld2", "r") do data
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

```{julia}
#| echo: false
#| label: fig-indexed-shift-b
#| fig-cap: "$\\tau = 0.025$"
jldopen("../data/delay_spiral_indexed_optimal_no_checker_among_halves.jld2", "r") do data
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

Heatmap of shifts $\beta$ between two $\lmbd$-weakly coupled interneuron models with respect to $C_2$, with and without delay. The heatmaps are computed utilizing the introduced techniques of starters, indexers, and iterators. The rest of the procedure follows @fig-cold-diff-shift and @fig-cold-diff-shift-delay.
:::

Notice that in @fig-indexed-shift-a, the boundary of the tongue for higher coupling strengths is sharper than in @fig-cold-diff-shift. Also, the sub-tongues present in all previous results now appear thinner and more distorted.

### Periodicity Checkers

In Figures [-@fig-cold-diff-shift] to [-@fig-indexed-shift], we have always computed a trajectory for each parameter combination belonging to a point of the grid $\meshGrid$ and then attempted to find the corresponding period and shift. This relied on the assumption that the (joint) trajectory is always periodic, so that all necessary quantities are well-defined and computable. If the assumption does not hold, we still get an output, but potentially a nonsensical one.

In other words, one should check the periodicity of the trajectory for each parameter combination. If periodicity is not confirmed, we can skip the rest of the algorithm, saving valuable computing time.

There are several ways one might go about determining periodicity. One possibility is to consider a band around a reference value with respect to $\vi x_{1,i}$, the $i$-th coordinate of the state of the first oscillator --- local maxima work especially well as reference values for this approach. We can then select the indices of all data points lying inside this band and study the distribution of the $i$-th coordinate of the trajectory points of the second oscillator, i.e., $\vi x_{2,i}$, at these indices. However, this method is very sensitive to the width of the band and to the reference value, rendering it almost unusable in practice.

An alternative approach is to detect peaks of $\vi x_{1,i}$, the $i$-th coordinate of the trajectory of the first oscillator. This again prescribes an index set $I$, which we can apply to $\vi x^I_{2,i}$. Then we can check whether $\vi x^I_{2,i}$ satisfies chosen criteria, e.g., stays within a predefined range. If this check is successful, we assume the trajectory as a whole is periodic and execute the rest of the procedure, see @fig-checked-shift.
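For concreteness, a minimal sketch of such a peak-based checker might read as follows. The scalar component trajectories `x₁`, `x₂`, the sign-change peak detection, and the relative tolerance `rtol` are illustrative choices, not the exact criteria of our implementation.

```{julia}
#| eval: false
# Peak-based periodicity check: find local maxima of the reference component
# x₁ (sign change of the finite differences), then test whether the other
# oscillator's component x₂ stays within a narrow band at those indices
function is_periodic(x₁::AbstractVector, x₂::AbstractVector; rtol=0.05)
    d = diff(x₁)
    I = [i for i in 2:length(d) if d[i-1] > 0 && d[i] <= 0]  # peak indices
    length(I) < 2 && return false      # not enough oscillations observed
    spread = maximum(x₂[I]) - minimum(x₂[I])
    scale = maximum(x₂) - minimum(x₂)
    return spread <= rtol * scale      # x₂ at the peaks of x₁ nearly constant
end
```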
::: {.callout-note #nte-peak-periodicity-checker}
##### Peak-based periodicity checker

A careful reader might notice that the peak-based periodicity checker closely resembles the *differences* period searcher, see @sec-period-searching. Indeed, it should be possible to merge these two concepts into a single algorithm. Nevertheless, we decided against it for code-readability reasons. Moreover, the computational cost of the peak-based periodicity checker is almost negligible compared to the entire run of the algorithm over the whole grid, even more so when taking into account the time saved on shift searching for skipped indices $\vi i \in \indicesOf{\meshGrid}$.
:::

::: {#fig-checked-shift layout-ncol="2"}
```{julia}
#| echo: false
#| label: fig-checked-shift-a
#| fig-cap: "$\\tau = 0$"
jldopen("../data/no_delay_spiral_indexed_optimal_peak_checker_among_halves.jld2", "r") do data
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

```{julia}
#| echo: false
#| label: fig-checked-shift-b
#| fig-cap: "$\\tau = 0.025$"
jldopen("../data/delay_spiral_indexed_optimal_peak_checker_among_halves.jld2", "r") do data
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

Heatmap of shifts $\beta$ between two $\lmbd$-weakly coupled interneuron models with respect to $C_2$, with and without delay, such that for each parameter combination, periodicity is tested before continuing with the computation. The rest of the computation follows @fig-indexed-shift.
:::

#### Sub-tongues

Throughout this chapter, we have discussed and implemented various measures to ensure an optimal estimation of the true shift between the oscillators. However, none of our modifications have removed the sub-tongues, which can be seen in every shift heatmap. By examining the trajectories at the corresponding grid points, one can verify both the periodicity and the computed shift. The emergence of the sub-tongues is thus likely due to our choice of methodology. In particular, they are probably caused by an insufficiently long transient (i.e., the convergence to the attractor is very slow and the oscillators remain near the initial anti-phase), by the discretization of the parameter space to the grid, or possibly by the choice of norms/distances for the period and shift searching. Either way, any of these phenomena could produce a similar fractal structure.

Be that as it may, this is *the* problem of the grid simulation, but also, in a way, its strength.
Indeed, from what we have computed, we cannot guarantee that these sub-tongue regions of anti-phase synchronization actually exist and are not just artifacts of our calculation. Nevertheless, the grid simulation, *along with a suitable choice of iterator*, might tell us something about the behavior of the studied system in nature, because it reflects a continuous change of parameters, under which the system might not always have enough time to converge to the true attractor. Note that the *spiral iterator* is an especially good choice, as it propagates the parametric change radially, avoiding sudden jumps and discontinuities.

### Multi-threading and Grid Computing

Figures [-@fig-cold-diff-shift] to [-@fig-checked-shift] presented so far have all been computed on a ThinkPad T490s with an Intel Core i7-8565U CPU and 16 GiB of RAM. We utilized at most 6 threads, and the computations took at most approximately 3 minutes. However, were we to increase the grid density, the computation time would grow quadratically. Indeed, the calculation of a 1200-by-1200 grid would take at least approximately 768 minutes (assuming 16 GiB of RAM is enough, so that no swapping, which would cause a significant slow-down, is needed).

Note that multi-threading is not trivial to implement for our algorithm. A wrong execution order might entirely defeat the purpose of the chosen iterator. Therefore, we must force each thread to evaluate the correct indices, e.g., the $l$-th thread might be restricted to $\Seqnc {\vi i_{l(k+1) - 1}} {k = 0} {N_l}$ (for an appropriate $N_l < N_{\meshGrid}$).
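A strided split of the iterator's output is one way to sketch this; `order` denotes the sequence of indices produced by the chosen iterator and `evaluate!` the per-index computation, both of which are placeholders here. Synchronization of the shared set of evaluated indices (needed by warm and indexed starters) is omitted for brevity.

```{julia}
#| eval: false
using Base.Threads

# Evaluate indices in the iterator's order, with thread l taking every
# nthreads()-th index: a strided split that respects the global order
function threaded_walk(order::Vector{CartesianIndex{2}}, evaluate!)
    n = nthreads()
    @threads for l in 1:n
        for k in l:n:length(order)   # thread l gets indices l, l + n, l + 2n, …
            evaluate!(order[k])
        end
    end
end
```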
In general, scientific computing is often too demanding for a personal computer. One possible solution is to offload the computation to a grid infrastructure, namely [MetaCentrum](https://metavo.metacentrum.cz/en/).

> MetaCentrum is the leading national grid computing infrastructure in the Czech Republic, providing researchers, scientists, and students with advanced computing resources to tackle complex computational challenges.

For illustration purposes, we ran the script generating @fig-checked-shift for a 1200-by-1200 grid instead of 75-by-75, see @fig-metacentrum-shift. As noted above, on a regular consumer-grade laptop, such a computation would take at the *very least* 750 minutes, simply extrapolating from the 75-by-75 grid computation. On MetaCentrum, we reserved 20 CPU cores, 36 GiB of RAM, and 150 GiB of scratch storage, with a maximal runtime of 16 hours. Using this setup, the computation took 82 minutes for the experiment without delay and 443 minutes with delay.

::: {#fig-metacentrum-shift layout-ncol="2"}
```{julia}
#| echo: false
#| label: fig-metacentrum-shift-a
#| fig-cap: "$\\tau = 0$"
jldopen("../data/no_delay_computation_metacentrum.jld2", "r") do data
    CairoMakie.activate!(type="png")
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

```{julia}
#| echo: false
#| label: fig-metacentrum-shift-b
#| fig-cap: "$\\tau = 0.025$"
jldopen("../data/delay_computation_metacentrum.jld2", "r") do data
    CairoMakie.activate!(type="png")
    fig = Figure(size=(600, 400))
    ax = CairoMakie.Axis(fig[1, 1];
        xlabel=L"C_2", ylabel=L"\lambda",
        xtickformat=v -> latexstring.(round.(v, digits=2)),
        ytickformat=v -> latexstring.(round.(v, digits=2)))
    β_ticks = (0:0.25:1, [L"0", L"\frac{1}{4} T", L"\frac{1}{2} T", L"\frac{3}{4} T", L"T"])
    hmap = heatmap!(ax,
        range(data["C₂"]...),
        range(data["λ"]...),
        transpose(data["shifts"]),
        colormap=CyclicZissou, colorrange=(0, 1))
    Colorbar(fig[1, 2], hmap; label=L"\beta", ticks=β_ticks)
    fig
end
```

Heatmap of shifts $\beta$ between two $\lmbd$-weakly coupled interneuron models with respect to $C_2$, with and without delay, computed on the MetaCentrum grid. Save for the grid density, the setup of the experiment mimics @fig-checked-shift.
:::

::: {.callout-caution #cau-stochastic-computation}
##### Stochastic simulations

So far, every computation of the shift heatmap has used deterministic simulations. In theory, one could use the Euler-Maruyama method, see @alg-euler-maruyama-method, to generate a stochastic realization of the heatmap. Then, using a Monte Carlo approach, we could repeat the stochastic grid generation to obtain a distribution of the shift at each point of the discretized grid. However, such a modification would drastically increase the computational time, both due to the repeated computations and because of the less "efficient" Euler-Maruyama solver (compared to Runge-Kutta methods and the like).
:::
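Under those caveats, the Monte Carlo layer itself is straightforward; the following sketch assumes a hypothetical `stochastic_shift_heatmap()` producing one Euler-Maruyama realization of the shift heatmap.

```{julia}
#| eval: false
using Statistics

# Monte Carlo aggregation of stochastic shift heatmaps: repeat the grid
# generation and summarize the per-point distribution of shifts.
# Note: β is cyclic in one period, so in practice circular statistics
# would be more appropriate than the plain mean and standard deviation.
function monte_carlo_shifts(stochastic_shift_heatmap, n_samples::Int)
    realizations = [stochastic_shift_heatmap() for _ in 1:n_samples]
    stacked = cat(realizations...; dims=3)
    return mean(stacked; dims=3)[:, :, 1], std(stacked; dims=3)[:, :, 1]
end
```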