Dynamic instabilities in scalar neural field equations with space-dependent delays

In this paper we consider a class of scalar integral equations with a form of space-dependent delay. These non-local models arise naturally when modelling neural tissue with active axons and passive dendrites. Such systems are known to support a dynamic (oscillatory) Turing instability of the homogeneous steady state. In this paper we develop a weakly nonlinear analysis of the travelling and standing waves that form beyond the point of instability. The appropriate amplitude equations are found to be the coupled mean-field Ginzburg-Landau equations describing a Turing-Hopf bifurcation with modulation group velocity of O(1). Importantly we are able to obtain the coefficients of terms in the amplitude equations in terms of integral transforms of the spatio-temporal kernels defining the neural field equation of interest. Indeed our results cover not only models with axonal or dendritic delays but those which are described by a more general distribution of delayed spatio-temporal interactions. We illustrate the predictive power of this form of analysis with comparison against direct numerical simulations, paying particular attention to the competition between standing and travelling waves and the onset of Benjamin-Feir instabilities.


Introduction
The ability of neural field models to exhibit complex spatio-temporal dynamics has been studied intensively since their introduction by Wilson and Cowan [1]. They have found wide application in interpreting experiments in vitro, e.g. electrical stimulation of slices of neural tissue [2][3][4][5], and phenomena in vivo, such as the synchronisation of cortical activity during epileptic seizures [6] or uncovering the mechanism of geometric visual hallucinations [7][8][9]. The sorts of dynamic behaviour typically observed in neural field models include spatially and temporally periodic patterns (beyond a Turing instability) [7,8], localised regions of activity (bumps and multi-bumps) [10,11] and travelling waves (fronts, pulses, target waves and spirals) [12,13]. The equations describing the evolution of the activity in the neural field typically take the form of integro-differential or integral equations. A variety of modifications have been put forward adding various biological mechanisms to the original model. For a recent review see [14].
It is the purpose of this paper to consider in more detail the role of axonal and dendritic delays in generating novel spatio-temporal patterns. Specifically, we are interested in patterns emerging via Turing-type instabilities of the homogeneous steady state. There are four different types of instability that generically occur, giving rise to: i) a shift to a new uniform stationary state; ii) stationary periodic patterns; iii) uniform global (bulk) oscillations; and iv) travelling (oscillatory) periodic patterns. We shall refer to i) and ii) as static instabilities and to iii) and iv) as dynamic. Note that in the original two-population model (without space-dependent delays) developed by Wilson and Cowan [1] there are two time-scales and two space-scales: the different membrane time-constants for the excitatory and inhibitory synapses, and the associated spatial interaction scales. However, in order to decrease the dimensionality of the system it is common to assume that inhibitory synapses act much faster than excitatory ones. The unequal finite spatial scales are preserved in the form of the connectivity kernel of the new single equation (often of Mexican hat shape), but one of the temporal scales is lost. This is the type of reduced system we consider here, with the notable exception that we bring back another time-scale associated with a space-dependent delay. Space-dependent delays arise naturally through two distinct signal processing mechanisms in models of neural tissue. Axonal delays are associated with the finite speed of action potential propagation. In models with dendrites there is a further distributed delay associated with the arrival of input at a synapse away from the cell-body. It is precisely the inclusion of these biological features that can give rise to not only static, but dynamic Turing instabilities.
For examples of the treatment of truly two-population models and the possibility of oscillatory pattern formation without space-dependent delays we refer the reader to Tass [8] and Bressloff and Cowan [9].
Axonal delays arise due to the finite speed of action potential propagation in transferring signals between distinct points in the neural field and are modelled in the work of [15,1,16] as simple space-dependent delays. Delays arising from the processing of incoming synaptic inputs by passive dendritic trees may also be incorporated into neural field models, as in the work of Bressloff [17]. In both cases it is now known that these space-dependent delays can lead to a dynamic Turing instability of a homogeneous steady state. These were first found in neural field models by Bressloff [17] for dendritic delays and more recently by Hutt et al. [18] for axonal delays. Both these studies show that a combination of short range inhibition and longer range excitation with a space-dependent delay may lead to a dynamic instability. This choice of connectivity, which we shall call inverted Mexican hat, is natural when considering cortical tissue and remembering that principal pyramidal cells i) are often enveloped by a cloud of inhibitory interneurons, and ii) have long range cortical connections that are typically excitatory [19][20][21]. A detailed examination, via linear analysis, of the relation between the connectivity shape (the balance between excitation and inhibition) and the dominant pattern type has been carried out by Hutt [22]. Roxin et al. did similar work for a model with a fixed discrete delay [23].
Our main result will be to go beyond the linear analysis of Bressloff [17] and Hutt et al. [18] and to develop amplitude equations for a one dimensional scalar neural field equation with space-dependent delays. Although borrowing heavily from techniques in the PDE community for the weakly nonlinear analysis of states beyond a Turing bifurcation [24][25][26], our analysis is complicated by the fact that it deals with pattern forming models described by integral equations. Previously amplitude equations in the context of neural field models have been derived also in [27][28][29]. Importantly we work with integral equations describing neural fields with both axonal and dendritic processing. Although they have existed as models for some time they are far less studied than models lacking such biologically realistic terms. Our formulation is general and encompasses a number of such models.
When deriving the amplitude equations we also consider the effects of space-dependent modulation and show that the appropriate equations are the mean-field Ginzburg-Landau equations [30].
In Section 2 we introduce the class of models we shall consider and describe how they can be written as integral equations with a particular spatio-temporal convolution structure.
Next, in Section 3, we analyse the linear stability of the homogeneous steady state and derive the conditions for the onset of a dynamic Turing (Turing-Hopf) instability. In Section 4 we derive the amplitude equations via a multiple-scales analysis. These amplitude equations are analysed in Section 5 to determine the selection process for travelling as opposed to standing waves. Moreover, we also consider Benjamin-Feir modulational instabilities, in which a periodic travelling wave (of moderate amplitude) loses energy to a small perturbation of other waves with nearly the same frequency and wavenumber. Numerical experiments are presented to illustrate and support our analysis of the amplitude equations. In Section 6 we consider the generalisation of our approach to tackle models with a form of spike frequency adaptation. We show that this can strongly influence the selection process for travelling versus standing waves. Finally, in Section 7 we discuss other natural extensions of the work we have presented in this paper.

The model
In many continuum models for the propagation of electrical activity in neural tissue it is assumed that the synaptic input current is a function of the pre-synaptic firing rate [31,15,10,32]. When considering a simple one-dimensional system of identical neurons communicating through excitatory and inhibitory synaptic connections it is typical to arrive at models of the form

u(x, t) = ∫_{−∞}^{t} ds η(t − s) ∫_R dy w(x − y) f(u(y, s − |x − y|/v)). (1)

Here, u(x, t) is identified as the synaptic activity at position x ∈ R at time t ∈ R^+. The firing rate activity generated as a consequence of local synaptic activity is simply f(u), where the firing rate function f is prescribed purely in terms of the properties of some underlying single neuron model or chosen to fit experimental data. A common choice for the firing rate function is the sigmoid

f(u) = 1/(1 + e^{−β(u−h)}), (2)

with steepness parameter β > 0 and threshold h > 0. The spatial kernel w(x) = w(|x|) describes not only the anatomical connectivity of the network, but also includes the sign of synaptic interaction. For simplicity it is also often assumed to be isotropic and homogeneous, as is the case here. The temporal convolution involving the kernel η(t) (with η(t) = 0 for t < 0) represents synaptic processing of signals within the network, whilst the delayed temporal argument to u under the spatial integral represents the delay arising from the finite speed of signals travelling between points x and y, namely |x − y|/v, where v is the velocity of an action potential. In models with dendrites there is a further space-dependent delay associated with the processing of inputs at synapses away from the cell-body. A simple way to incorporate this in a model is to represent the dendritic tree as a single passive cable with diffusive properties (see below).

[Fig. 1. Left: All synaptic inputs impinge on the same site at a distance z_0 from the soma. Right: The distance of the synapse from the soma is linearly correlated with the spatial separation |x − y| between neurons.]
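Direct numerical simulation of a model of the form (1) can be sketched as follows. The scheme below (Euler stepping of the two-ODE form of an alpha-function synapse, plus a ring buffer for the delayed history), the wizard-hat kernel, and all parameter values are illustrative assumptions for this sketch, not the method of [42].

```python
import numpy as np

def wizard_hat(d, w0=1.0):
    """Inverted (w0 > 0) wizard hat: local inhibition, mid-range excitation."""
    return w0 * (np.abs(d) - 1.0) * np.exp(-np.abs(d))

def f_sigmoid(u, beta=5.0, h=0.25):
    """Sigmoid firing rate with gain beta and threshold h."""
    return 1.0 / (1.0 + np.exp(-beta * (u - h)))

def simulate(N=64, L=20.0, T=2.0, dt=0.01, v=1.0, alpha=1.0, u0=0.3):
    """Euler integration of the axonal-delay model on a periodic grid.

    The alpha synapse eta(t) = alpha^2 t exp(-alpha t) is handled by the
    equivalent pair of first-order ODEs (1 + d/dt / alpha) u = g,
    (1 + d/dt / alpha) g = psi, where psi is the delayed spatial input.
    """
    x = np.linspace(0.0, L, N, endpoint=False)
    dx = L / N
    # periodic distances |x - y| and the corresponding discrete delays
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, L - d)
    W = wizard_hat(d) * dx                  # quadrature weights
    D = np.round(d / (v * dt)).astype(int)  # delay |x - y| / v in time steps
    nhist = D.max() + 1
    hist = np.full((nhist, N), u0)          # ring buffer of past u values
    u = np.full(N, u0)
    g = np.full(N, u0)
    head = 0
    for _ in range(int(T / dt)):
        # delayed field u(y, t - |x - y| / v), looked up per (x, y) pair
        u_delayed = hist[(head - D) % nhist, np.arange(N)[None, :]]
        psi = (W * f_sigmoid(u_delayed)).sum(axis=1)
        u = u + dt * alpha * (g - u)
        g = g + dt * alpha * (psi - g)
        head = (head + 1) % nhist
        hist[head] = u
    return x, u
```

By translation invariance of the periodic kernel, a spatially uniform initial history should remain uniform under this update; this provides a basic correctness check on the delayed-lookup indexing.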
By introducing the kernel K(x, t) = w(x)δ(t − |x|/v) we can re-write (1) in the succinct form

u = η * K ⊗ f(u), (3)

where we have introduced the two-dimensional convolution operation ⊗,

(g ⊗ h)(x, t) = ∫_R dy ∫_{−∞}^{t} ds g(x − y, t − s) h(y, s),

for g = g(x, t), and the temporal convolution operation *,

(h * g)(x, t) = ∫_{−∞}^{t} ds h(t − s) g(x, s),

for h = h(t). The form of (3) allows us to naturally generalize a model with axonal delays to one with dendritic delays, or any other type of time-dependent connectivity. For example, the model of Bressloff [17,33], incorporating a neuron with a semi-infinite dendritic cable with potential v = v(z, t), z ∈ R^+, is written as a cable equation driven by axo-dendritic synaptic input. Here w(x, z) is an axo-dendritic connectivity function depending not only upon cell-cell distances |x|, but on dendritic distances z also. Assuming that there is no flow of current back from the cell body (soma) at z = 0 to the dendrite, the neural field equation is simply given by u(x, t) = v(x, 0, t). Hence, this dendritic model is recovered from (3) with a kernel built from E(z, t), the Green's function of the semi-infinite cable equation (with E(z, t) = 0 for t < 0). With a single synapse at a fixed distance z_0 > 0 from the cell body, w(x, z) = w(x)δ(z − z_0), the generalized connectivity function is separable, taking the form K(x, t) = w(x)E(z_0, t). We will also look at non-separable models with space-dependent dendritic delays, w(x, z) = w(x)δ(z − z_0 − φ|x|). In these models axons from more distant neurons arborize further up the dendritic cable, in accordance with anatomical data [34]. See Fig. 1 for an illustration.
In fact throughout this paper we shall consider the model as written in the form of (3), and allow for arbitrary choices of K = K(x, t), subject to K(x, t) = K(|x|, t) and K(x, t) = 0 for t < 0, such that the classic axonal and dendritic delay models are recovered as special cases. Later in Section 6 we shall consider further generalizations of (3) to include the effects of neuronal modulation and adaptation.

Turing instability analysis
In this paper we are primarily interested in the analysis of pattern formation beyond a Turing instability for the neural field equation given by (3). It is the dependence of K on the pair (x, t) rather than just x, as would be the case in the absence of space-dependent delays, that can give rise to not only static, but dynamic Turing instabilities. Here we briefly re-visit the standard linear Turing instability analysis, along similar lines to that of Bressloff [17] and Hutt et al. [18]. For a general review of pattern formation and pattern forming instabilities we refer the reader to [35][36][37][38]. For a discussion of pattern formation within the context of neural field models we suggest the articles by Ermentrout [39] and Bressloff [40].
Let u(x, t) = ū be the spatially-homogeneous steady state of equation (3). The conditions for the growth of inhomogeneous solutions can be obtained through simple linear stability analysis of ū. Namely, we look for instabilities to spatial perturbations of globally periodic type e^{ikx} and determine the intrinsic wavelength 2π/k of the dominant growing mode.

From (3) the homogeneous steady state satisfies

ū = f(ū) η̂(0) K̂(0, 0),

where we have introduced the following Laplace and Fourier-Laplace integral transforms:

η̂(λ) = ∫_0^∞ dt η(t) e^{−λt}, K̂(k, λ) = ∫_R dx ∫_0^∞ dt K(x, t) e^{−ikx} e^{−λt}. (8)

We Taylor-expand the firing rate function, which is the only nonlinearity in the system, about the steady state,

f(u) = f(ū) + γ_1(u − ū) + γ_2(u − ū)^2 + γ_3(u − ū)^3 + ⋯, γ_n = f^(n)(ū)/n!.

The control parameter for our bifurcation analysis is therefore γ_1 = f′(ū). To obtain the conditions for linear stability we consider the linearised equation

u − ū = γ_1 η * K ⊗ (u − ū), (9)

and solutions of the form u − ū = Re e^{λt + ikx}, where λ = ν + iω ∈ C. Although the system and its solutions are real, for conciseness we will often work with the complex extension and restrict our attention to the real case when this has implications for the analysis. After substitution into (9) we obtain a dispersion relation for (k, λ) in the form L(k, λ) = 0, where

L(k, λ) = 1 − γ_1 η̂(λ) K̂(k, λ). (10)

For a fixed γ_1, solving L(k, λ) = 0 defines a function λ(k). An instability occurs when for the first time there are values of k at which the real part of λ is nonnegative (see Fig. 2, left). A Turing bifurcation point is defined as the smallest value γ_c of the control parameter for which there exists some k_c ≠ 0 satisfying Re(λ(k_c)) = 0. It is said to be static if Im(λ(k_c)) = 0 and dynamic if Im(λ(k_c)) ≡ ω_c ≠ 0. The dynamic instability is often referred to as a Turing-Hopf bifurcation and generates a global pattern with wavenumber k_c, which moves coherently with speed c = ω_c/k_c, i.e. as a periodic travelling wave train.
If the maximum of the dispersion curve is at k = 0 then the mode that is first excited is another spatially uniform state. If in addition ω_c ≠ 0, we expect the emergence of a homogeneous limit cycle with temporal frequency ω_c.
For simplicity we will assume that at γ 1 = γ c the associated value of k c is unique (up to a sign change). Since K is an even function of the space variable, wave-numbers will always come in pairs ±k c and because the problem is real the dispersion curve has reflective symmetry around the real axis and frequencies ±ω c will also appear together as solutions.
Hence, the eigenspace of the linear equation will be real and at γ_c a complete basis is given by the four functions e^{±i(ω_c t ± k_c x)}. Beyond a Turing instability (k_c ≠ 0) we expect the growth of patterns of the form e^{i(k_c x ± ω_c t)} + cc (where cc stands for complex conjugate), with ω_c = 0 for a static instability, ω_c ≠ 0 for a dynamic instability, and k_c = 0 at a Hopf bifurcation. Note that for a dynamic instability, generically giving rise to periodic travelling waves, it may also be possible to see a standing wave, arising from the interaction of a left and a right travelling wave. To determine the conditions under which this may occur it is necessary to go beyond a linear analysis and determine the evolution of mode amplitudes. The techniques to do this will be described in Section 4, where we also see that the growing solutions predicted by the linear analysis converge to patterns of finite amplitude due to the nonlinear (saturating) properties of the model.

Next we consider some illustrative examples and compare the linear instability predictions with direct numerical simulations. To do this we write λ = ν + iω and split the dispersion relation into real and imaginary parts to obtain

G(ν, ω) = 0, H(ν, ω) = 0, (12)

where G(ν, ω) = Re L(k, ν + iω) and H(ν, ω) = Im L(k, ν + iω). Solving the system of equations (12) gives us a curve in the (ν, ω) plane parameterised by k. A static bifurcation may thus be identified with the tangential intersection of ω = ω(ν) with the line ν = 0 at the point ω = 0. Similarly, a dynamic bifurcation is identified with a tangential intersection at ω ≠ 0 (see Fig. 2, right). We are required to track points in parameter space for which ν′(ω) = 0. By implicit differentiation of (12) this is equivalent to tracking points where

∂G/∂k ∂H/∂ω − ∂H/∂k ∂G/∂ω = 0. (13)
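In practice the curve (ν(k), ω(k)) can be continued in k by a Newton iteration on the complex dispersion relation, which is equivalent to a 2D Newton step on (G, H). A minimal sketch follows; the toy dispersion used here (an alpha synapse with a crude constant delay τ) is an assumption chosen to keep the code self-contained, not one of the models analysed in this paper.

```python
import numpy as np

GAMMA1, TAU = 2.0, 0.5

def w_hat(k):
    # magnitude of the wizard-hat Fourier transform for |w0| = 1
    return 4.0 * k**2 / (1.0 + k**2) ** 2

def L(k, lam):
    # toy dispersion: alpha synapse (alpha = 1) with constant delay TAU
    return (1.0 + lam) ** 2 - GAMMA1 * w_hat(k) * np.exp(-lam * TAU)

def dL_dlam(k, lam):
    return 2.0 * (1.0 + lam) + TAU * GAMMA1 * w_hat(k) * np.exp(-lam * TAU)

def branch(ks, lam0=0.0 + 0.0j, iters=50):
    """Continue the root lambda(k) = nu + i omega, reusing each root
    as the initial guess at the next k (naive numerical continuation)."""
    out, lam = [], lam0
    for k in ks:
        for _ in range(iters):
            lam = lam - L(k, lam) / dL_dlam(k, lam)
        out.append((k, lam.real, lam.imag))
    return np.array(out)
```

Each row of the returned array is a point (k, ν, ω) on the curve defined by (12), so the residual of the dispersion relation should vanish along the branch.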

An example: axonal delays
We take a wizard hat connectivity function with space-dependent axonal delays, such that K(x, t) = w(x)δ(t − |x|/v), as in Section 2, with w(x) = w_0(|x| − 1)e^{−|x|}. The global weight w_0 allows us to study both inverted (w_0 > 0) and standard (w_0 < 0) wizard hat connectivities in a common framework. The Fourier-Laplace transform of K is then

K̂(k, λ) = w_0 [1/s_+^2 − 1/s_+ + 1/s_−^2 − 1/s_−], s_± = 1 + λ/v ± ik.

Further, we choose the alpha function synaptic response η(t) = α^2 t e^{−αt} Θ(t), where Θ(t) is the Heaviside step function, Θ(t) = 1 for t ≥ 0 and Θ(t) = 0 otherwise. A simple calculation gives η̂(λ) = (1 + λ/α)^{−2}. The dispersion relation (10) can then be written as a 6th-order polynomial of the form Σ_{n=0}^{6} a_n(α, v, γ_1, k) λ^n = 0. The expressions for the coefficients a_n are easily calculated, though for brevity we do not list them here.
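Both transforms quoted here are easy to sanity-check by direct quadrature. The grids and parameter values in the sketch below are illustrative choices.

```python
import numpy as np

alpha, w0 = 1.5, 1.0

def trap(f, grid):
    """Composite trapezoid rule on a uniform grid (works for complex f)."""
    d = grid[1] - grid[0]
    return (f[:-1] + f[1:]).sum() * d / 2.0

t = np.linspace(0.0, 60.0, 600001)      # Laplace quadrature grid
x = np.linspace(-40.0, 40.0, 800001)    # Fourier quadrature grid

def eta_hat(lam):
    """Numerical Laplace transform of the alpha function
    eta(t) = alpha^2 t exp(-alpha t); should equal (1 + lam/alpha)^(-2)."""
    return trap(alpha**2 * t * np.exp(-(alpha + lam) * t), t)

def w_hat(k):
    """Numerical Fourier transform of w(x) = w0 (|x| - 1) exp(-|x|);
    should equal -4 w0 k^2 / (1 + k^2)^2.  w is even, so the sine
    part of the transform vanishes and cos suffices."""
    w = w0 * (np.abs(x) - 1.0) * np.exp(-np.abs(x))
    return trap(w * np.cos(k * x), x)
```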
Full details will be given elsewhere [41]. Using numerical continuation we can track the set of solution points (γ_c, k_c, ω_c) of the system of equations defined by (12) and (13) in system parameter space. Moreover, we can construct a critical curve in (v, γ_1)-parameter space delimiting the region of stability of the homogeneous steady state. Here we track all six roots of the dispersion relation and find that for w_0 > 0 two pairs of complex roots can cross into the positive half-plane with increasing γ_1. One of these pairs crosses with nonzero wavenumber k_c(v) and the other with k_c = 0. All of the critical curves can be constructed in this way. The techniques used for direct numerical simulations of the models studied in this paper are described in [42].

An example: dendritic delays
We make the same choices of w(x) and η(t) for the dendritic model given by (6) as in the axonal example just studied above, with a synapse at a fixed distance z_0 from the soma. In this case the (separable) kernel describing the model is given by K(x, t) = w(x)E(z_0, t), with E the Green's function of the semi-infinite cable, and w has the real Fourier transform ŵ(k) = −4w_0 k^2/(1 + k^2)^2, independent of λ. Hence, the critical wavenumber at all instabilities is constant, k_c = 1. Although there are countably many eigenvalues, only two pairs can ever cross into the positive half-plane. The Turing instability curve for w_0 as a function of z_0 is shown in Fig. 5, left. If we consider the synaptic contact point to be linearly correlated with the distance between neuronal sites, then K(x, t) = w(x)E(z_0 + φ|x|, t).
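Since the wavenumber dependence of the separable kernel enters only through ŵ(k), the statement k_c = 1 amounts to the extremum of |ŵ(k)| sitting at k = 1 regardless of z_0. A one-line numerical check:

```python
import numpy as np

# |w_hat(k)| = 4 |w0| k^2 / (1 + k^2)^2 for the wizard hat; its maximum
# (value 1 for |w0| = 1) sits at k = 1, fixing k_c = 1 for the separable
# dendritic model independently of z0.
k = np.linspace(0.0, 5.0, 500001)
mag = 4.0 * k**2 / (1.0 + k**2) ** 2    # |w_hat(k)| with |w0| = 1
kc = k[np.argmax(mag)]
```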

Weakly nonlinear analysis
A characteristic feature of the dynamics of systems beyond an instability is the slow growth of the dominant eigenmode, giving rise to the notion of a separation of scales.
This observation is key in deriving the so-called amplitude equations. In this approach information about the short-term behaviour of the system is discarded in favour of a description on some appropriately identified slow time scale. By Taylor-expansion of the dispersion curve near its maximum one expects the scalings

ν = O(γ_1 − γ_c), k − k_c = O(√(γ_1 − γ_c)),

close to bifurcation, where γ_1 is the bifurcation parameter. Since the eigenvectors at the point of instability are of the type A_1 e^{i(ω_c t + k_c x)} + A_2 e^{i(ω_c t − k_c x)} + cc, for γ_1 > γ_c emergent patterns are described by an infinite sum of unstable modes (in a continuous band) of the form e^{ν_0(γ_1 − γ_c)t} e^{i(ω_c t + k_c x)} e^{i k_0 √(γ_1 − γ_c) x}. Let us denote γ_1 = γ_c + ǫ^2 δ, where ǫ is arbitrary and δ is a measure of the distance from the bifurcation point. Then, for small ǫ we can separate the dynamics into fast eigen-oscillations e^{i(ω_c t + k_c x)}, and slow modulations of the form e^{ν_0 ǫ^2 t} e^{i k_0 ǫ x}. If we set as further independent variables τ = ǫ^2 t for the modulation timescale and χ = ǫx for the long-wavelength spatial scale (at which the interactions between excited nearby modes become important), we may write the weakly nonlinear solution as

u(x, t) − ū = ǫ A_1(χ, τ) e^{i(ω_c t + k_c x)} + ǫ A_2(χ, τ) e^{i(ω_c t − k_c x)} + cc + O(ǫ^2).

It is known from the standard theory [35] that weakly nonlinear solutions will exist in the form of either travelling waves (A_1 = 0 or A_2 = 0) or standing waves (|A_1| = |A_2|). We are now in a position to derive the envelope or amplitude equations for patterns emerging beyond the point of Turing-Hopf instability for the neural field model (3).
Here we omit the simpler case for the static Turing instability since it has already been analysed in [43]. For a PDE system the general form of the amplitude equations can be found very easily by considering the resonances in the system. Indeed the normal form of the Turing-Hopf bifurcation is well-known to be a system of two coupled complex Ginzburg-Landau equations [37]. To tackle an integral system we will use the lengthy but systematic approach of multiple scales analysis. By doing so we ultimately obtain the normal form for a bifurcation to patterns that have a non-zero group velocity. This type of normal form was first found in [30] in the context of travelling wave convection.

Scale separation
Introducing the further intermediate time scale θ = ǫt, solutions to (3) satisfy

u(x, t, ǫx, ǫt, ǫ^2 t) = [η * K ⊗ f(u(·, ·, ǫ·, ǫ·, ǫ^2 ·))](x, t). (16)

Note that the operator on the right-hand side of (16) is not identical to the convolution η * K⊗: here the integration acts on the last three variables as well, due to their ǫ-dependence on x and t. In order to find the equation describing the slow dynamics we have to assess this ǫ-dependent contribution. We will adopt an approach that considers a polynomial expansion of the right-hand side of (16) in powers of ǫ; the coefficients of this polynomial are convolutions, each acting at a single scale only. We begin by writing the solution as an asymptotic expansion

u = ū + ǫu_1 + ǫ^2 u_2 + ǫ^3 u_3 + ⋯,

with, as yet, unknown functions u_i = u_i(x, t, χ, θ, τ). Further, we substitute the firing rate function by its Taylor expansion about the steady state,

f(u) = f(ū) + γ_1(u − ū) + γ_2(u − ū)^2 + γ_3(u − ū)^3 + ⋯,

where the coefficients γ_2, γ_3, . . . remain as free parameters that will participate in the final amplitude equations. When we consider specific models in Section 5 we will map their firing rate parametrisations onto these coefficients. To separate the scales at which the integration acts, we write ǫy = ǫy + ǫx − ǫx = χ + ǫ(y − x), and similarly for ǫs and ǫ^2 s. This enables us to make a Taylor expansion in the last three arguments,

u_i(y, s, ǫy, ǫs, ǫ^2 s) = u_i(y, s, χ, θ, τ) + ǫ(y − x) ∂u_i/∂χ + ⋯.

The slow variables χ, θ, τ are independent of y and s, on which the integration acts, and we obtain a sum of convolutions of the type η * K⊗, with kernels weighted by powers of (y − x) and (s − t). We denote by M_j the segregated action of these operators on the different scales, and use s − t = (r − t) + (s − r) to step through the intermediate time scale. Hence, the model (3) separates into an infinite sequence of equations (which we truncate at third order) of the form:

u_1 = γ_c η * K ⊗ u_1, (22)
u_2 = γ_c η * K ⊗ u_2 + (terms involving u_1 and the slow-scale operators M_j), (23)
u_3 = γ_c η * K ⊗ u_3 + (terms involving u_1, u_2 and the M_j). (24)

Fredholm alternative
One can see that each equation above contains terms of the asymptotic expansion of u only of the same order or lower. This means that we can start from the first equation and solve the hierarchy order by order. Introducing the operator L = I − γ_c η * K⊗, the first equation (22) is Lu_1 = 0, and we see that our entire solution space is a perturbation of the kernel of L. In fact, we have already solved (22) in Section 3 and know that

ker L = span{cos(ω_c t + k_c x), cos(ω_c t − k_c x), sin(ω_c t + k_c x), sin(ω_c t − k_c x)},

i.e. any solution u_1 ∈ ker L can be expressed as a linear combination

u_1 = A_1 e^{i(ω_c t + k_c x)} + A_2 e^{i(ω_c t − k_c x)} + cc, (25)

where A_{1,2} are arbitrary complex coefficients depending only on the slow space and time scales. We may now use the Fredholm alternative [44] to consider restrictions on the g_n (the lower-order terms on the right of (23) and (24)) that will give us equations for the amplitudes A_{1,2}. We work with the averaging inner product

⟨u, v⟩ = lim_{T→∞} (1/2T) ∫_{−T}^{T} dt lim_{X→∞} (1/2X) ∫_{−X}^{X} dx ū(x, t) v(x, t), (26)

where the bar denotes complex conjugation. Introducing the functions η*(t) = η(−t) and K*(x, t) = K(x, −t) (η and K reflected around the point t = 0), it can be shown (using Fourier series) that the adjoint of L is the operator L* = I − γ_c η* * K*⊗, with respect to the inner product (26).
The operator L* has the same kernel space as L (since the dispersion relation is invariant under the change t → −t). From the Fredholm alternative we have, for all u ∈ ker L, that ⟨u, g_n⟩ = ⟨u, Lu_n⟩ = ⟨L*u, u_n⟩ = 0. Therefore we can derive equations for the amplitudes simply by calculating the two complex projections

⟨e^{i(ω_c t ± k_c x)}, g_n⟩ = 0. (27)
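The projection identities used below, ⟨u_+, u_1⟩ = A_1 and ⟨u_−, u_1⟩ = A_2, can be checked numerically with an averaging inner product over whole periods. The amplitudes, periods and normalisation in this sketch are our own test choices, not values from the analysis.

```python
import numpy as np

Lx, Tp = 5.0, 3.0                      # spatial and temporal periods
kc, wc = 2 * np.pi / Lx, 2 * np.pi / Tp
A1, A2 = 0.4 - 0.2j, -0.1 + 0.3j       # arbitrary complex amplitudes

x = np.linspace(0.0, Lx, 400, endpoint=False)
t = np.linspace(0.0, Tp, 300, endpoint=False)
X, T = np.meshgrid(x, t)

def mode(sign):
    """The critical modes u_pm = exp(i (wc t +/- kc x))."""
    return np.exp(1j * (wc * T + sign * kc * X))

lin = A1 * mode(+1) + A2 * mode(-1)
u1 = lin + np.conj(lin)                # real field: linear combination + cc

def inner(u, v):
    """Averaging inner product <u, v> = mean(conj(u) v) over whole periods."""
    return np.mean(np.conj(u) * v)
```

All cross terms (e.g. conj(u_+) times the A_2 mode or the conjugated modes) average to zero over whole periods, so only the matching amplitude survives each projection.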
To simplify notation we set u_± = e^{i(ω_c t ± k_c x)}. Then the scalar products (27) expand as

⟨u_±, g_2⟩ = γ_2 η̂ K̂ ⟨u_±, u_1^2⟩ + γ_c M_1^± η̂ K̂ ⟨u_±, u_1⟩ = 0, (28)

and similarly at third order, ⟨u_±, g_3⟩ = 0 (29). Here the Fourier-Laplace transforms (see (8)) η̂(λ) and K̂(k, λ) are all evaluated at the points k = k_c and λ = iω_c, and by M_j^± we denote differential operators acting on the slow scales. Since from (25) u_1 is itself a linear combination of the modes u_±, most scalar products in (28) and (29) are easily found: ⟨u_+, u_1⟩ = A_1, ⟨u_−, u_1⟩ = A_2, and ⟨u_±, u_1^2⟩ = 0. The first solvability condition (28) therefore reduces to M_1^± η̂ K̂ A_{1,2} = 0, a pair of first-order transport equations. The general solutions of these first-order PDEs are

A_1 = A_1(ξ_+, τ), A_2 = A_2(ξ_−, τ),

where ξ_± = χ ± v_g θ and v_g = ∂ω/∂k. This means that all slow amplitude modulations A_{1,2} will travel with an intermediate group speed v_g, to the left and right respectively. Notice that the repeated action of M_1^± does not result in zero. To calculate the last remaining product ⟨u_±, u_1 u_2⟩ in (29) we need to find u_2 first.

We can do this from the second order equation (23). Since all operators in this equation are linear, u_2 will be some quadratic form of the basis vectors, which we write as

u_2 = Σ_{m,n ∈ {0,±1,±2}} B_{m,n} e^{i(m ω_c t + n k_c x)}.

To find the unknown B_{m,n} we substitute this into (23) and match coefficients; the resulting expressions appear in the amplitude equation. The last remaining scalar products ⟨u_±, u_1 u_2⟩ then follow, and we are now in a position to write out the third order solvability condition, which gives the amplitude equations.
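The group velocity v_g = ∂ω/∂k entering ξ_± can be read off any dispersion relation by implicit differentiation, dλ/dk = −L_k/L_λ, with v_g the imaginary part on the critical branch. The toy dispersion below is purely illustrative (its exact group velocity is the parameter c).

```python
# Toy dispersion L(k, lambda) = lambda - (mu - k^2) - i c k, so that
# lambda(k) = mu - k^2 + i c k and the exact group velocity is c.
MU, C = 0.1, 0.8

def L(k, lam):
    return lam - (MU - k**2) - 1j * C * k

def vg(k, lam, h=1e-6):
    """Group velocity via implicit differentiation of L(k, lambda) = 0:
    d(lambda)/dk = -L_k / L_lambda, v_g = Im d(lambda)/dk."""
    Lk = (L(k + h, lam) - L(k - h, lam)) / (2 * h)
    Llam = (L(k, lam + h) - L(k, lam - h)) / (2 * h)
    return (-Lk / Llam).imag
```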

The amplitude equations
Substituting in (29) all the necessary scalar products (calculated above) we obtain a pair of evolution equations for A_1 and A_2 that still contain the unknown coefficients B_{1±1}. To eliminate these we follow [30] and average the two equations over the ξ_− and ξ_+ variables, respectively. For example, the only quantities in the first equation that vary with ξ_− are B_{11} and A_2; the average of B_{11} is independent of ξ_−, and thus its derivative term vanishes. After averaging we obtain

∂A_1/∂τ = a A_1 + b |A_1|^2 A_1 + c ⟨|A_2|^2⟩ A_1 + d ∂^2 A_1/∂ξ_+^2,
∂A_2/∂τ = a A_2 + b |A_2|^2 A_2 + c ⟨|A_1|^2⟩ A_2 + d ∂^2 A_2/∂ξ_−^2, (44)

with appropriately modified parameters a, b, c and d, whose expressions in terms of the integral transforms we collect in (45). The system of equations (44) comprises the coupled mean-field Ginzburg-Landau equations describing a Turing-Hopf bifurcation with modulation group velocity v_g of order one [30,45,46].
As a reminder of the interpretation of terms in the above equations, note that η̂ = η̂(iω_c) and K̂ = K̂(k_c, iω_c) are the integral transforms (8) evaluated at criticality. The averages ⟨|A_{1,2}|^2⟩ depend on the type of solutions we are looking for — localised or periodic. Here we will assume A_{1,2} are periodic with periods P_{1,2}, in which case

⟨|A_{1,2}|^2⟩ = (1/P_{1,2}) ∫_0^{P_{1,2}} |A_{1,2}(ξ)|^2 dξ.

The form of the spatial cross-interaction term in (44) reflects the fact that, since v_g = O(1), as the two wavetrains move across each other they are not able to feel each other's spatial structure; each sees only the mean of the other.
In the next section we show how to use the normal form to predict the formation of travelling or standing waves beyond the point of Turing instability.

Normal form analysis
The coupled complex Ginzburg-Landau equations are one of the most-studied systems in applied mathematics, with links to nonlinear waves, second-order phase transitions, Rayleigh-Bénard convection and superconductivity. They give a normal form for a large class of bifurcations and nonlinear wave phenomena in spatially extended systems and exhibit a large variety of solutions. Many of these appear due to the influence of the boundary conditions. Here our focus is on infinitely-extended systems. First let us neglect the diffusive terms in (44), which allows us to determine the conditions for the selection of travelling vs standing wave behaviour beyond a dynamic Turing instability.

Travelling wave versus standing wave selection
The space-independent reduction of (44) takes the form

dA_1/dτ = a A_1 + b |A_1|^2 A_1 + c |A_2|^2 A_1,
dA_2/dτ = a A_2 + b |A_2|^2 A_2 + c |A_1|^2 A_2. (47)

The travelling wave, A_2 = 0, is described by

A_1 = √(−a_r/b_r) e^{i e_1 τ},

with small frequency change e_1 = a_i − a_r b_i/b_r. The subscripts r and i denote real and imaginary parts respectively. The standing wave, A_1 = A_2, is described by

A_1 = √(−a_r/(b_r + c_r)) e^{i e_2 τ}, e_2 = a_i − a_r (b_i + c_i)/(b_r + c_r).

The linear stability analysis of these stationary states leads to the following conditions for selection between a travelling wave (TW) or a standing wave (SW): for a_r > 0 and b_r < 0 the TW exists (supercritical) and for b_r − c_r > 0 it is stable; for a_r > 0 and b_r + c_r < 0 the SW exists (supercritical) and for b_r − c_r < 0 it is stable.
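These selection rules can be packaged as a small classifier. The amplitude formulas below are the standard Ginzburg-Landau stationary states and should be read as our reconstruction of the stationary solutions of (47).

```python
def classify(a, b, c):
    """Return [(pattern, stability), ...] for TW and SW in the supercritical
    regimes, from the complex normal-form coefficients a, b, c."""
    ar, br, cr = a.real, b.real, c.real
    out = []
    if ar > 0 and br < 0:
        out.append(("TW", "stable" if br - cr > 0 else "unstable"))
    if ar > 0 and br + cr < 0:
        out.append(("SW", "stable" if br - cr < 0 else "unstable"))
    return out

def tw_amplitude(a, b):
    """|A1| for the travelling wave (A2 = 0): |A1|^2 = -a_r / b_r."""
    return (-a.real / b.real) ** 0.5

def sw_amplitude(a, b, c):
    """|A1| = |A2| for the standing wave: |A1|^2 = -a_r / (b_r + c_r)."""
    return (-a.real / (b.real + c.real)) ** 0.5
```

For example, a = 1, b = −1 − i, c = −3 gives b_r − c_r = 2 > 0 and b_r + c_r = −4 < 0, so both states exist but only the TW is stable.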
In the parameter regions where these stationary states of the ODE system (47) do not exist, the cubic terms fail to saturate the linear growth and the amplitude does not remain a bounded function. However, in these regions the bifurcation is subcritical and higher-order terms are needed in the normal form to give us information about the amplitudes. We will not concern ourselves with these regions. We now translate the conditions above into the language of our neural field model. Since the only free variables are those describing the nonlinearity f(u), i.e. γ_2 and γ_3, this motivates us to write out a_r, b_r and c_r, using (45), as quadratic expressions in γ_2 and γ_3 (equation (50)). The expressions é, f́, ǵ appearing there involve only the kernel transforms and the first derivative of the nonlinearity f, which is fixed by the condition defining a Turing instability.

By simply examining the selection conditions it is possible to make some general statements about the geometry of the TW- and SW-selection parameter regions. For the specific type of models (3) we are concerned with, one can immediately see from the form of (50) that all regions in the (γ_2, γ_3) space will have parabolic borders with a tip at the origin γ_2 = γ_3 = 0, and that they may only differ in their curvature (second derivative). The only two possible qualitatively different configurations of parameter space are shown in Fig. 6. They are separated by the condition f́ = ǵ. For f́ > ǵ there are regions of parameter space that will support stable travelling waves; however, in this case no stable SWs are possible. We call this configuration a TW-generator. For f́ < ǵ there are also regions of parameter space where standing waves are stable and TWs exist but are unstable. We call this configuration the TW/SW-generator. In a TW-generator SWs exist everywhere where TWs exist. In a TW/SW-generator TWs exist everywhere where SWs exist. In the next section we use this machinery to explore TW vs SW selection in models with a cubic and a sigmoidal firing rate nonlinearity.

Cubic and sigmoidal firing rate function
The general case of independent γ_2 and γ_3 can be emulated by choosing a cubic nonlinearity for f. For the sigmoid firing rate function (2) we have instead γ_2 = f″(ū)/2 and γ_3 = f‴(ū)/6 evaluated at the steady state. At the bifurcation point the transcendental equation γ_c = f′(ū) reduces the number of independent parameters by one. For example, we may use this equation to write h = h(β) and then express both γ_2 and γ_3 as functions of just the gain β. The relationship between h and β (for a fixed γ_c = 1) is plotted in Fig. 7, right. The parameter portrait on the two branches is identical because the axis of symmetry of the (γ_2, γ_3)-space maps onto the middle of the graph, h = 0. Note that for β < 4γ_c there is no corresponding value of h which solves the condition to be at the bifurcation point γ_c.
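For the sigmoid the coefficients γ_2 and γ_3 follow from the logistic derivative identities f′ = βf(1 − f), f″ = β²f(1 − f)(1 − 2f), f‴ = β³f(1 − f)(1 − 6f + 6f²). A sketch with illustrative parameter values, checked against finite differences:

```python
import numpy as np

beta, h = 6.0, 0.2    # illustrative gain and threshold

def f(u):
    """Sigmoid firing rate f(u) = 1 / (1 + exp(-beta (u - h)))."""
    return 1.0 / (1.0 + np.exp(-beta * (u - h)))

def gammas(u):
    """gamma1 = f'(u), gamma2 = f''(u)/2, gamma3 = f'''(u)/6 via the
    logistic identities for the derivatives of the sigmoid."""
    s = f(u)
    g1 = beta * s * (1.0 - s)
    g2 = beta**2 * s * (1.0 - s) * (1.0 - 2.0 * s) / 2.0
    g3 = beta**3 * s * (1.0 - s) * (1.0 - 6.0 * s + 6.0 * s**2) / 6.0
    return g1, g2, g3
```

Note that g1 = βs(1 − s) is maximised at s = 1/2 with value β/4, which is why β < 4γ_c leaves no admissible threshold h at the bifurcation point.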

Benjamin-Feir instability
We have determined some aspects of the nonlinear behaviour of the full system from the reduced amplitude equations (47). Basically, the stable solution of the system (47) is composed of one (TW) or two (SW) periodic wavetrains moving with speed determined by the linear instability. However, the diffusion term present in the full space-dependent amplitude equations (44) alters somewhat the nature of solutions. The generic instability associated with such diffusion, in an infinite 1D system, is an instability to long-wave sideband perturbations due to the influence of nearby excited modes on the primary mode. This is the so-called Benjamin-Feir-Eckhaus instability. We will describe it briefly here, for more detail see for example [35].
Let us consider a travelling wave with wavenumber k = k c + ǫq, where q ∈ (− √ δ, √ δ), so that k lies in the unstable set of modes for γ 1 = γ c + ǫ 2 δ > γ c . The solution to the spatially extended system is A = r 1 e^{i e 1 τ + iqξ}, with e 1 = a i + b i r 1^2 − d i q 2 . By adding small perturbations to both the magnitude and phase of A we find a condition for sideband stability. One can see that the relation between the self-coupling and diffusion constants b and d determines a band of modes q of finite width, centred on the primary mode k c , which are stable to sideband perturbations. A solution with wavenumber outside this band will break up in favour of a wave with wavenumber from inside the stable band. In Fig. 7, left, we show the Fourier transform of a wave solution to the model of Section 3.1 undergoing a Benjamin-Feir-Eckhaus instability. If condition (53) holds, there is no stable wavenumber at all, and several different regimes of spatio-temporal chaos can develop depending on parameters, as described in [47]. This is commonly referred to as the Benjamin-Feir instability. For the mean-field Ginzburg-Landau equations it is known that, if the control parameter is very close to the bifurcation value, the condition (53) also determines the point of destabilisation of the standing wave solution [30]. Note, however, that the validity of this result for SWs is limited by more complicated re-stabilisation boundaries at slightly larger values of γ 1 [46]; we will not consider this here. An example of a Benjamin-Feir instability will be presented later in Section 6 for a model with Mexican hat connectivity and spike frequency adaptation (Fig. 11).
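In the notation of the standard complex Ginzburg-Landau equation A τ = A + (1 + ic 1 )A ξξ − (1 + ic 3 )|A| 2 A — used here only as a schematic stand-in for (44), with c 1 and c 3 playing the roles of the ratios of imaginary to real parts of the diffusion and self-coupling constants d and b; the exact correspondence is not reproduced here — the classical criteria take a simple closed form. A sketch under that assumption:

```python
def benjamin_feir_stable(c1: float, c3: float) -> bool:
    """Newell criterion: plane waves can be sideband-stable iff 1 + c1*c3 > 0.
    When this fails, no wavenumber is stable (Benjamin-Feir instability)."""
    return 1.0 + c1 * c3 > 0.0

def eckhaus_band_halfwidth_sq(c1: float, c3: float) -> float:
    """Half-width in q**2 (control parameter scaled to 1) of the band of
    sideband-stable wavenumbers centred on the primary mode:
        q**2 < (1 + c1*c3) / (3 + c1*c3 + 2*c3**2).
    Returns 0 when the Benjamin-Feir criterion fails (empty band)."""
    if not benjamin_feir_stable(c1, c3):
        return 0.0
    return (1.0 + c1 * c3) / (3.0 + c1 * c3 + 2.0 * c3 * c3)
```

For the real Ginzburg-Landau equation (c 1 = c 3 = 0) this recovers the familiar Eckhaus limit q 2 < 1/3.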

Examples revisited
We apply the theory developed so far to determine the regions of TW-SW selection for the model with axonal delays we looked at in Section 3.1. Again, we keep α = 1 and work with v and the nonlinear parameters β (neuronal gain) and h (threshold). Using the remarks made in Section 5.1.1 we plot the TW-SW regions in terms of β only. Similarly, if we follow the critical curve of the model with dendritic delays with a fixed synaptic site z 0 (6) (Fig. 5, left), we find that the only stable solutions for every z 0 are travelling waves. The case of correlated synapses is more interesting, and we consider first a cubic nonlinearity. As we vary the synaptic contact parameter φ we find that there is a value at which behaviour switches between the TW- and TW/SW-generator (i.e. where f́ = ǵ). We can track this point in a second parameter, for example the axonal conduction velocity v; this diagram is plotted on the left of Fig. 8. Consistent with other models we obtain synchronised homogeneous oscillations for large delays and TWs for small delays.
Intermediate between these two extremes we find a TW/SW-generator. However, here the sizes of regions of stable SWs are very small. A sigmoidal nonlinearity stretches the parameter space in a way which makes stable SWs practically unobtainable for this model (see Fig. 8, right).

Spike frequency adaptation
In real cortical tissue there is an abundance of metabolic processes whose combined effect is to modulate the neuronal response. It is convenient to think of these processes in terms of local feedback mechanisms that modulate synaptic currents. Here we consider a simple model of so-called spike frequency adaptation (SFA), previously discussed in [42], in which γ a > 0 is the strength of coupling to the adaptive field a and w a (x) = exp(−|x|/σ)/(2σ) describes the spread of negative feedback. By solving for a with the Green's function η a (t) = e −t Θ(t) we can reduce the model to a single equation in which K a (x, t) = −η a (t)w a (x) and the coefficient γ a has been incorporated into the nonlinearity f a . Using this notation it is apparent that both our linear and weakly nonlinear analysis can be readily extended: the dispersion relation for a model with SFA follows in the same way, with coefficients γ an defined through the derivatives f (n) a (u), and the coefficients of the mean-field Ginzburg-Landau equation are calculated with E mn i = E i (nk c , imω c ), all quantities evaluated at the critical point. Consider the model with axonal delay given by K(x, t) = (1 − |x|) e −|x| δ(t − |x|/v) in the context of linear adaptation (f a (u) = γ a u). The linear instability portrait with respect to the adaptation strength γ a is plotted in Fig. 9, left, for v = 1 and σ = 0 (representing local feedback). The case σ ≠ 0 (non-local feedback) is qualitatively similar. With increasing γ a the main effect, on all critical curves (such as those seen in Fig. 3), is to move them toward γ 1 = 0. For γ a larger than some critical value γ c a the homogeneous steady state becomes unstable; in this case the majority of wavemodes are excited and solutions grow without bound. For w 0 < 0 (and the physically sensible choice γ 1 > 0) we see that a dynamic instability can occur for sufficiently large γ a < γ c a . This statement is true for all v.
Hence, dynamic instabilities are possible in a model with short range excitation and long range inhibition provided there is sufficiently strong adaptive feedback. Indeed, this observation has previously been made by Hansel and Sompolinsky [48]. In fact, Curtu and Ermentrout [29] discovered homogeneous oscillations, travelling waves and standing waves in the neighbourhood of a Takens-Bogdanov (TB) bifurcation in a model with Mexican hat connectivity and feedback lacking space-dependent delays (v −1 = 0). The dependence of this TB point on v and γ a in a model with standard wizard hat connectivity is shown in Fig. 10; it first appears for v ≈ 6.27 at γ a ≈ 2.51. In contrast to the previous models we have studied, the regions of stable SWs are large in the general cubic case and exist even for the sigmoidal firing rate (see Fig. 9, right). Another new feature in the presence of SFA is the appearance of parameter regimes at large v for which TWs and SWs are Benjamin-Feir unstable; solutions of all the models without SFA that we studied were verified to be Benjamin-Feir stable. An example of such an instability of a SW in a model with SFA is shown in Fig. 11. However, we were not able to observe chaotic solutions, as one might expect from the properties of the system's normal form. The question as to why the transition settles to modulated ordered solutions remains open.
Let us briefly consider an example of nonlinear adaptation with the choice f a = γ a f in (55). In the case of purely local feedback (σ = 0) the linear instability portrait for the model of Section 3.1 does not change qualitatively as γ a is increased from zero, although all instability curves move towards zero for both w 0 > 0 and w 0 < 0. Unlike in the linear adaptation model they do not hit γ 1 = 0 but instead tend asymptotically to it; thus there is no "unstable regime" in which all modes are excited. A TW/SW-generator region appears for intermediate speeds, shown in Fig. 10, though the SW bands in (γ 2 , γ 3 ) space are very small. However, they widen significantly for non-local feedback and can also be observed for the case of sigmoidal firing rates.
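The reduction step used above — eliminating the adaptive field through the Green's function η a (t) = e −t Θ(t) — can be checked numerically: the solution of ȧ = −a + s(t) with a(0) = 0 is the convolution (η a ∗ s)(t). A sketch with a hypothetical unit-step drive s (the spatial convolution with w a is suppressed for simplicity):

```python
import math

def adaptation_step_response(t: float) -> float:
    """Green's-function solution of da/dt = -a + s, a(0) = 0, for a unit
    step s(t) = Theta(t): a(t) = (eta_a * s)(t) = 1 - exp(-t), where
    eta_a(t) = exp(-t) * Theta(t)."""
    return 1.0 - math.exp(-t)

def euler_adaptation(t_end: float, dt: float = 1e-4) -> float:
    """Forward-Euler integration of da/dt = -a + 1 from a(0) = 0, as an
    independent check of the Green's-function solution."""
    a = 0.0
    for _ in range(int(round(t_end / dt))):
        a += dt * (-a + 1.0)
    return a
```

The two computations agree to within the Euler discretisation error, confirming that solving for a via η a is equivalent to integrating the adaptation dynamics directly.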

Discussion
In this paper we have studied pattern formation in a broad class of neural field models.
These models provide minimal descriptions of neural tissue with biological features that include synaptic and dendritic processing, the effects of axonal delays, and spike frequency adaptation. Importantly we have written them using the language of integral equations and spatio-temporal kernels. By pursuing a systematic multiple scales approach for deriving amplitude equations in an integral framework we have been able to recover a number of known results about pattern formation in neural fields, and, more importantly, been able to treat the biologically important case of space-dependent delays for the first time.
[Figure caption (displaced): wavenumbers in the neighbourhood of the unstable mode start to grow (a); after some time new nonlinear stability conditions come into play which, depending on the initial condition (k = 1.0395 and k = k c respectively), may let only a single wavenumber survive (b) or stabilise a more complicated configuration (c). In the Fourier space plots one can also see the appearance of a spatial resonance: wavenumbers which are multiples of the dominant ones become partially excited.]
These delays arise naturally in models with axons and dendrites and are treated from a mathematical perspective by choosing an appropriate spatio-temporal kernel K(x, t) for the integral model. In fact because our approach does not depend upon detailed properties of K(x, t) the techniques we have developed can also cope with time varying synaptic connections, such as would be the case if a network were undergoing some form of learning.
The derivation of the appropriate amplitude equations for group velocity of O(1) has allowed us to treat the selection problem for standing vs travelling waves in models with both axonal [18] and dendritic delays [17], and to also recover and extend the original work of Curtu and Ermentrout [29] on linear spike frequency adaptation. Although the latter authors have used similar methods to ours to find instances of standing waves, one point worth making for all the models studied in this paper is that the windows of parameter space that support stable standing waves are, in fact, very small. Our analysis has enabled us first to delimit such regions and second to consider how they may be maximised. For example, in the case of nonlinear spike frequency adaptation we find that increasing the spatial extent of modulating currents is one such mechanism for promoting the robustness of stable standing waves under parameter variation.
A number of extensions of the work in this paper readily suggest themselves, including i) the study of two-population models [1], ii) the effects of heterogeneity [49,28], iii) the treatment of dynamic thresholds [50], iv) the effects of distributed axonal speeds [51] and v) the extension to two spatial dimensions. These are topics of ongoing study and will be reported on elsewhere.