1.2. Quadratic Integrate-and-Fire (QIF) Spiking Neuron Model

The QIF neuron model is a spiking point neuron model that simplifies the dynamics of a neuron to the evolution of a single, dimensionless state variable \(v_i\):

\[\tau \dot v_i = v_i^2 + \mu_i(t),\]

where \(\tau\) is a global time constant and \(\mu_i\) is a lumped representation of all inputs that arrive at neuron \(i\) at time \(t\). Whenever \(v_i \geq v_{peak}\), a spike is counted and the reset condition \(v_i \rightarrow v_{reset}\) is applied. This introduces a discontinuity into the dynamics of \(v_i\) at the spike event and makes the state variable resemble the membrane potential of a neuron close to the soma. Note, however, that \(v_i\) is a dimensionless state variable and thus represents the membrane potential of a neuron only up to an undefined scaling constant. For a detailed description of the QIF neuron model, its dynamic regimes, and a derivation of this equation from more complex models of neuronal dynamics, see [1] and [2].
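
To make the spike-and-reset mechanism concrete, the following minimal sketch integrates a single QIF neuron with a forward-Euler scheme. All parameter values and the integration scheme are chosen here purely for illustration and are not taken from RectiPy:

import numpy as np

# illustrative parameters (arbitrary values, not RectiPy defaults)
tau, mu = 1.0, 2.0                 # time constant and constant input
v_peak, v_reset = 100.0, -100.0    # spike threshold and reset value
dt, steps = 1e-3, 10000            # integration step size and number of steps

v, spike_times = v_reset, []
for step in range(steps):
    v += dt * (v**2 + mu) / tau    # forward-Euler update of the QIF equation
    if v >= v_peak:                # spike detected ...
        v = v_reset                # ... apply the reset condition
        spike_times.append(step * dt)

print(f"{len(spike_times)} spikes emitted")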

Here, we will document the pre-implemented QIF neuron model available in RectiPy and how to use it as a base neuron model for an RNN layer. Currently, two different versions of the QIF model are implemented in RectiPy, both of which we will introduce below.

1.2.1. QIF neuron with synaptic dynamics

The first version of the QIF neuron model available in RectiPy defines the input variable as follows

\[\begin{aligned}
\mu_i(t) &= \eta_i + I_i(t) + \tau s_i^{in},\\
\dot s_i &= -\frac{s_i}{\tau_s} + \delta(v_i - v_{peak}).
\end{aligned}\]

This definition of the input \(\mu_i\) allows the QIF neuron to be used as part of an RNN layer. In the first equation, \(\eta_i\) represents a neuron-specific excitability, \(I_i\) represents an extrinsic input entering neuron \(i\) at time \(t\) (for instance, the input from a preceding layer), and \(s_i^{in}\) represents the combined input that the neuron receives from other QIF neurons. The latter becomes explicit in the second equation, which governs the dynamics of the synaptic output of the \(i^{th}\) neuron. The first term is a simple leakage term with synaptic integration time constant \(\tau_s\), whereas the second term adds a spike-triggered pulse whenever the neuron spikes and is zero otherwise (\(\delta\) is the Dirac delta function). Connections between different QIF neurons can be established by projecting \(s_i\) onto the variable \(s_i^{in}\) of other neurons in the network. We demonstrate this below.
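
Before turning to the RectiPy implementation, a quick sketch of how the synaptic dynamics behave in a discretized (forward-Euler) scheme: each spike effectively increments \(s_i\) by one (the integral of the delta pulse), after which \(s_i\) decays with time constant \(\tau_s\). The spike times and the value of \(\tau_s\) below are hypothetical:

import numpy as np

dt, steps, tau_s = 1e-3, 10000, 0.5    # arbitrary values for illustration
spike_steps = {1000, 4000, 4500}       # hypothetical spike steps of neuron i

s = np.zeros(steps)
for step in range(1, steps):
    ds = -s[step - 1] / tau_s                    # leakage with time constant tau_s
    spike = 1.0 if step in spike_steps else 0.0  # integrated delta pulse at spike events
    s[step] = s[step - 1] + dt * ds + spike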


1.2.1.1. Step 1: Initialize a rectipy.Network instance

We will start out by creating a random coupling matrix and implementing an RNN model of coupled QIF neurons.

from rectipy import Network
import numpy as np

# define network parameters
node = "neuron_model_templates.spiking_neurons.qif.qif"
N = 5
J = np.random.randn(N, N)*2.0

# initialize network
net = Network(dt=1e-3)

# add QIF population to network
net.add_diffeq_node("qif", node, weights=J, source_var="s", target_var="s_in", input_var="I_ext", output_var="s",
                    op="qif_op", spike_def="v", spike_var="spike")

The above code implements a network of \(N = 5\) randomly coupled QIF neurons. We couple the variables \(s_i\) to the variables \(s_i^{in}\) via the coupling strengths given by \(J_{ij}\). Mathematically speaking, the above code implements \(s_i^{in} = \sum_{j=1}^N J_{ij} s_j\). In addition, we define the variable \(I_{ext}\) as the input variable of the RNN and \(s\) as its output variable.
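
As a quick illustration of this projection (using a hypothetical snapshot of the synaptic outputs; this is not part of the RectiPy workflow):

s = np.random.rand(N)    # hypothetical snapshot of the synaptic outputs s_j
s_in = J @ s             # entry i equals the sum over j of J_ij * s_j, i.e. s_i^in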

1.2.1.2. Step 2: Simulate the RNN dynamics

Now, let's examine the network dynamics by integrating the evolution equations over 10000 steps, using the integration step size of dt = 1e-3 defined above (i.e., a total of 10000 * 1e-3 = 10 units of time).

# define network input
steps = 10000
inp = np.zeros((steps, N)) + 10.0

# perform numerical simulation
obs = net.run(inputs=inp, sampling_steps=10)

We created a time series of constant input and fed it to the input variable of the RNN at each integration step. Let's have a look at the resulting network dynamics.

from matplotlib.pyplot import show, legend

obs.plot("out")
legend([f"s_{i}" for i in range(N)])
show()

As can be seen, the different QIF neurons generated different synaptic outputs even though they received the same extrinsic input, owing to the random coupling and the recurrent network dynamics it gives rise to.
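
Since the inputs array has one row per integration step and one column per network neuron, time-varying or neuron-specific inputs can be constructed in the same way. As a sketch, the following (arbitrarily chosen) pulse input drives the network only during the second half of the simulation:

# hypothetical pulse input: no drive during the first half of the simulation
inp_pulse = np.zeros((steps, N))
inp_pulse[steps // 2:, :] = 10.0

# the simulation call is identical to the one used above
obs_pulse = net.run(inputs=inp_pulse, sampling_steps=10)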

1.2.2. QIF neuron with spike-frequency adaptation

The second QIF neuron model we provide with RectiPy incorporates a spike-frequency adaptation (SFA) mechanism. Specifically, the input \(\mu_i\) is defined as

\[\mu_i(t) = \eta_i + I_i(t) - x_i + \tau s_i^{in},\]

where \(x_i\) represents a neuron-specific SFA variable, the dynamics of which are given by

\[\dot x_i = -\frac{x_i}{\tau_x} + \alpha \delta(v_i - v_{peak}),\]

with adaptation time constant \(\tau_x\) and adaptation strength \(\alpha\). The effects of SFA on the macroscopic dynamics of a QIF population are described in detail in [3]. Here, we will show how they affect the RNN dynamics in a small QIF network.
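
Before switching the network node, the single-neuron sketch from above can be extended by the adaptation variable to illustrate the mechanism: every spike increments \(x_i\) by \(\alpha\), which lowers the effective input and thereby slows down subsequent firing. Again, all parameter values are arbitrary illustration choices and the coupling term is omitted:

# extend the single-neuron Euler sketch from above by the SFA variable x
tau, mu, tau_x, alpha = 1.0, 2.0, 10.0, 1.0   # arbitrary illustration values
v_peak, v_reset = 100.0, -100.0
dt, steps = 1e-3, 10000

v, x, spikes = v_reset, 0.0, []
for step in range(steps):
    v += dt * (v**2 + mu - x) / tau   # adaptation variable x reduces the effective input
    x += dt * (-x / tau_x)            # exponential decay of the adaptation variable
    if v >= v_peak:
        v = v_reset
        x += alpha                    # each spike increments x by alpha (integrated delta pulse)
        spikes.append(step * dt)

print(f"{len(spikes)} spikes emitted with SFA")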

# remove old QIF population from network
net.pop_node("qif")

# add QIF-SFA population to network
node = "neuron_model_templates.spiking_neurons.qif.qif_sfa"
net.add_diffeq_node("qif_sfa", node, weights=J, source_var="s", target_var="s_in", input_var="I_ext", output_var="s",
                    op="qif_sfa_op", spike_def="v", spike_var="spike")

# perform numerical simulation
obs = net.run(inputs=inp, sampling_steps=10)

# visualize the network dynamics
obs.plot("out")
legend([f"s_{i}" for i in range(N)])
show()

As can be seen, the overall spiking activity in the network was clearly reduced by adding the SFA term (by default \(\alpha = 1.0\) is used).
