Review 2
- Sparse balance: excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights by Ramin Khajeh, Francesco Fumarola, Larry F. Abbott
- Citation: Khajeh, Ramin, Francesco Fumarola, and L. F. Abbott. “Sparse balance: excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights.” bioRxiv (2021).
- DISCLAIMER: I am very interested in this part of computational neuroscience, but I'm a bit of a beginner on this topic. Some of the stuff here goes over my head, esp. the tractability of the gamma distribution in setting synaptic weights. If I got anything horribly wrong or you have any feedback, please email me at zladd@berkeley.edu. Later I plan to add a feedback section using Disqus at the bottom of each blog post.
- Intro
- Neurons receive a lot of excitatory input, so there must be inhibitory balance. Several mechanisms are cited as responsible for this:
- Recurrent excitation
- Recurrent inhibition
- Feedforward excitation
- No evidence that feedforward excitation is particularly strong in these models.
- This is definitely new to me.
- States that
- synaptic weights (and their spread) are of order $\frac{1}{\sqrt{K}}$, where $K$ is the node in-degree.
- the total inhibitory input is of order $\sqrt{K}$
- and it’s generally cancelled by an excitatory input of equal magnitude, but they want to avoid this.
- Suggests that without strong feedforward exc. input firing rates become low, but turns this into an argument for a sparser network, not a dense network with low-firing-rate neurons.
- Getting rid of the strong feedforward exc. input means the synaptic weights have to be scaled down (mean of order $\frac{1}{K}$ rather than $\frac{1}{\sqrt{K}}$, as I read it) so the recurrent input can be balanced by an order-1 bias, but then the input fluctuations become too small; they suggest remedying this by broadening the weight distribution so its standard deviation stays of order $\frac{1}{\sqrt{K}}$, which makes both the mean and the fluctuations of the summed input order 1 again. (I try to make this scaling concrete in a small numerical sketch at the end of this section.)
- One part of this that I don't understand is a formal mathematical definition for a synaptic weight distribution. I think a more experienced reader will know what this means. My understanding is that this distribution will be a vector like $\mathbf{x} \in \mathbb{R}^{K}$, and the excitatory part would follow something like $\mathbf{x}_{1:\sqrt{K}} \sim \mathcal{N}(\frac{1}{\sqrt{K}}, 1)$ (standard model -> variance == 1?) and the inhibitory component would be $\mathbf{x}_{\sqrt{K}+1:2\sqrt{K}} \sim \mathcal{N}(-\frac{1}{\sqrt{K}}, 1)$.
- there are a few things here I would want to ask:
- We have only accounted for $2\sqrt{K}$ of the $K$ weights. Are the rest of the synaptic weights just 0?
- honestly, I’m missing something here. I think all weights are accounted for, I just don’t know how.
- Do these weights have unit variance?
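To make the scaling argument above concrete for myself, here is a small numpy sketch (my own toy construction, not anything from the paper): it compares the summed input from $K$ presynaptic neurons when the weights are narrowly distributed around $1/\sqrt{K}$ (the conventional balanced-network scaling) versus broadly distributed with mean $\sim 1/K$ and standard deviation $\sim 1/\sqrt{K}$ (my reading of the paper's proposal), assuming unit presynaptic rates and treating all variability as coming from the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1000          # in-degree (number of presynaptic inputs)
n_neurons = 2000  # number of postsynaptic neurons to sample

# Conventional balanced-net scaling: weights of order 1/sqrt(K),
# narrowly distributed (10% relative jitter, a value I made up).
w_conv = (1.0 / np.sqrt(K)) * (1.0 + 0.1 * rng.standard_normal((n_neurons, K)))

# Broad-weight regime (my reading of the paper): mean ~ 1/K,
# standard deviation ~ 1/sqrt(K), i.e. coefficient of variation ~ sqrt(K).
mean_b, std_b = 1.0 / K, 1.0 / np.sqrt(K)
shape = (mean_b / std_b) ** 2      # gamma shape alpha = 1/CV^2
scale = std_b ** 2 / mean_b        # gamma scale theta
w_broad = rng.gamma(shape, scale, size=(n_neurons, K))

for name, w in [("conventional 1/sqrt(K)", w_conv), ("broad gamma", w_broad)]:
    summed = w.sum(axis=1)         # total input per neuron with unit presynaptic rates
    print(f"{name:>22}: mean input {summed.mean():6.2f}, spread across neurons {summed.std():5.2f}")

# Conventional weights: the mean input grows as sqrt(K) and has to be cancelled by a
# large feedforward bias; broad weights: mean and spread are both of order 1.
```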
- The model
- This part starts to clear up the questions above
- They state that
Sparse balance: excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights. (Khajeh 2021)
Networks have currents $x_i$ for $i = 1, \dots, N$ and firing rates $\phi(x_i)$ that obey \begin{equation} \Gamma_x \frac{dx_i}{dt} = -x_i - \sum\limits_{j=1}^{N} J_{ij}\phi(x_j) + I_0 \end{equation} where $\phi$ is a nonlinear function and $J_{ij} \geq 0$
- I am not sure what $J_{ij}$ represents at this point. (Note: it gets explained a few sentences later: they are i.i.d. weights.)
- Besides that, this function does a good job of clearly defining synaptic integration.
- Considers tanh vs. exponential nonlinear activation functions for $\phi$ (a minimal simulation sketch follows right below).
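Here is a minimal Euler-integration sketch of the quoted rate equation, to make sure I understand it. The threshold-tanh nonlinearity, the gamma weight parameters (mean $\sim 1/K$, CV $\sim \sqrt{K}$), and all numerical values are my own guesses for illustration, not the paper's actual settings.

```python
import numpy as np

def phi(x):
    """Threshold-tanh rate nonlinearity: zero for negative input, tanh above."""
    return np.tanh(np.maximum(x, 0.0))

def simulate_inhib_net(N=1000, K=200, I0=1.0, J0=1.0, cv=None,
                       tau=1.0, dt=0.05, T=100.0, seed=0):
    """Euler integration of  tau * dx_i/dt = -x_i - sum_j J_ij * phi(x_j) + I0
    for a purely inhibitory network with non-negative weights J_ij >= 0."""
    rng = np.random.default_rng(seed)
    if cv is None:
        cv = np.sqrt(K)                  # 'broad' regime: weight CV of order sqrt(K) (my assumption)
    mean_w = J0 / K                      # mean weight of order 1/K (my assumed scaling)
    shape, scale = 1.0 / cv**2, mean_w * cv**2
    conn = rng.random((N, N)) < K / N    # sparse connectivity, ~K inputs per neuron
    J = np.where(conn, rng.gamma(shape, scale, size=(N, N)), 0.0)
    x = rng.standard_normal(N)
    for _ in range(int(T / dt)):
        x += (dt / tau) * (-x - J @ phi(x) + I0)
    return phi(x)                        # firing rates at the end of the run

rates = simulate_inhib_net()
active = rates > 1e-3                    # arbitrary 'active' threshold I chose
print(f"fraction active: {active.mean():.2f}, "
      f"mean rate of active neurons: {(rates[active].mean() if active.any() else 0.0):.3f}")
```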
- Simulation results
- When the feedforward excitatory input is reduced from the usual $I_0 \sim \sqrt{K}$ scaling to order 1, the network has either sparse, high firing-rate responses (high variance) or dense, low firing-rate responses (low variance). (I wrote a small helper, shown below after these observations, to keep the two regimes straight.)
- this lines up nicely with what was introduced in the beginning and has a very clear intuitive interpretation. Nice!
- Then they dig in to some detailed numerical explanations for this effect.
- Figure 1 is very clear
- I was surprised to see that in panel G, the feedforward input only had a small impact on the fraction of active neurons.
- Sparse E/I balance leads to a non-Gaussian distribution of synaptic weights, skewed toward zero.
- The authors state that these results represent a robust distribution of synaptic weights despite the absence of a strong feedforward excitatory input.
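A small helper I wrote for myself while reading this section: given a vector of firing rates from a simulation (e.g., the sketch in the model section above), these are the statistics I would use to decide whether a response is "sparse and high-rate" or "dense and low-rate". The activity threshold and the toy example patterns are arbitrary choices of mine.

```python
import numpy as np

def sparsity_summary(rates, active_thresh=1e-3):
    """Summarize a population rate vector: fraction of active neurons,
    mean rate of the active ones, and the across-neuron rate variance."""
    rates = np.asarray(rates)
    active = rates > active_thresh
    return {
        "fraction_active": active.mean(),
        "mean_active_rate": rates[active].mean() if active.any() else 0.0,
        "rate_variance": rates.var(),
    }

# toy example: a 'sparse, high-rate' pattern vs. a 'dense, low-rate' pattern
rng = np.random.default_rng(0)
sparse_high = np.where(rng.random(1000) < 0.1, 1.0, 0.0)   # 10% of neurons firing strongly
dense_low = np.full(1000, 0.05)                            # everyone firing weakly
print("sparse/high:", sparsity_summary(sparse_high))
print("dense/low:  ", sparsity_summary(dense_low))
```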
- Analysis of Sparse Balance Networks
- here the authors explore the high-variance, low-exc.-input, sparse networks further, using a Heaviside nonlinear response function.
- had to look this one up, but it's basically a step activation function (0 below threshold, 1 above) whose derivative is the Dirac delta function.
- weights drawn from $\text{gamma}(\alpha, \theta)$ (there's a small sketch of this parameterization at the end of this section).
- Figure 3A would benefit from text titles on the figure, not just in the caption (at least for a reader like me lol I suck at reading figs).
- What I got out of Fig. 3 is that the Heaviside function leads to a lower fraction of active neurons compared to the tanh and exp. activation functions, and thus a sparser network. But also that it had a non-Gaussian distribution of weights under the sparse condition.
- Interesting point: with a larger $K$, or a larger shape parameter for the gamma, one might expect the gamma distribution to approach a Gaussian. But this is not the case with $\eta \sim \text{gamma}(\alpha, \theta)$.
- I wonder if I understood that right?
- If the feedforward bias is small -> a large synaptic variance is required for robust fluctuations.
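Since I had to look up both ingredients, here is a tiny sketch of how I think they fit together: a Heaviside rate function, and gamma-distributed weights parameterized by their mean and coefficient of variation. The particular mean and CV values are placeholders I chose to echo the broad-weight regime; the point is that a CV of order $\sqrt{K}$ corresponds to a tiny shape parameter, which is why the distribution stays extremely skewed and nothing like a Gaussian.

```python
import numpy as np

def heaviside_phi(x, rate=1.0):
    """Heaviside nonlinearity: a neuron is either silent (0) or fires at a fixed rate."""
    return rate * (x > 0.0).astype(float)

def gamma_weights(mean, cv, size, rng):
    """Draw non-negative weights from a gamma distribution with a given mean and
    coefficient of variation; shape alpha = 1/cv^2, scale theta = mean * cv^2."""
    return rng.gamma(1.0 / cv**2, mean * cv**2, size=size)

print("heaviside rates:", heaviside_phi(np.array([-0.5, 0.2, 1.3])))

rng = np.random.default_rng(0)
K = 400
# broad regime: CV of order sqrt(K) -> shape parameter alpha of order 1/K (tiny),
# so almost all weights are near zero with a long tail of large ones
w = gamma_weights(mean=1.0 / K, cv=np.sqrt(K), size=100_000, rng=rng)
print(f"shape alpha = {1.0 / K:.4f}, empirical CV = {w.std() / w.mean():.1f}, "
      f"fraction of weights below 1e-6: {(w < 1e-6).mean():.2f}")
```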
- Sparse Activity Arises from Network Dynamics
- They use the population-averaged autocorrelation function to find out how strongly the synaptic input at one time is correlated with the input at some earlier lag.
- Autocorrelation decays faster for larger $K$ (Fig. 4). (A sketch of how I would compute this is below, after the quote.)
Sparse balance: excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights. (Khajeh 2021)
it is the dynamics of the recurrent synaptic inputs, not their size, that leads to sparse activity at large K
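Here is how I would compute a population-averaged autocorrelation in practice. The estimator is standard; the input traces below are just synthetic filtered noise standing in for the recurrent inputs $\eta_i(t)$, so nothing here is the paper's actual data.

```python
import numpy as np

def pop_autocorr(eta, max_lag):
    """Population-averaged autocorrelation of input traces.

    eta: array of shape (N, T), one row per neuron's input time series.
    Returns C(lag) = < eta_i(t) * eta_i(t + lag) >, averaged over neurons and time,
    after subtracting each neuron's time-averaged input."""
    eta = eta - eta.mean(axis=1, keepdims=True)
    T = eta.shape[1]
    return np.array([(eta[:, :T - lag] * eta[:, lag:]).mean() for lag in range(max_lag)])

# synthetic stand-in for eta_i(t): noise low-pass filtered with time constant tau
rng = np.random.default_rng(0)
N, T, dt, tau = 200, 2000, 0.1, 1.0
noise = rng.standard_normal((N, T))
eta = np.zeros((N, T))
for t in range(1, T):
    eta[:, t] = eta[:, t - 1] + dt / tau * (-eta[:, t - 1] + noise[:, t])

C = pop_autocorr(eta, max_lag=100)
print("normalized autocorrelation at lags 0, 10, 50:",
      np.round(C[[0, 10, 50]] / C[0], 3))
```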
- Mean Field Analysis
- Introduces a variable $m$, the mean-field approximation of $\bar{\phi}$ (the population-averaged firing rate).
- Based on equations 7 and 11 (I think), they derive a closed-form relation between $m$ and the gamma-distribution shape parameter $\alpha$:
\begin{equation}
\alpha = \frac{mJ_0^2\sqrt{m}}{g^2}
\end{equation}
- $g^2$ comes from gamma dist. scale parameter.
- considers several cases of computing the autocorrelation of $\eta$ based on different sizes of $K$ and the decorrelation rate $\beta$.
- When $K$ is small, $\eta$ follows the gamma distribution $\text{gamma}(\alpha, \theta)$, and they can then solve the closed-form mean-field calculation using an integral.
- even at $K \sim 10^3$ this holds.
- Then there's a more complicated closed-form calculation for $\mathrm{var}(\eta)$ that I'm gonna skip over for now.
- involves decomposing the variance into quenched and time-dependent components.
- not very familiar with this decomposition. Maybe if I look into mean field decomp. more I will see.
- Fig. 5 just shows that this recovery of the gamma-distribution parameters characterizes the weights and input parameters well. Again, a clear and nice figure. (I sketch a toy version of this kind of check right below.)
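To convince myself I followed the mean-field logic, here is a toy numerical version of the two steps I think are being described: (i) if the input $\eta$ is gamma distributed, the mean activity $m$ can be computed by a one-dimensional integral over the gamma density and checked against a direct Monte Carlo average, and (ii) the gamma parameters can be "recovered" from samples by moment matching, which is my guess at the kind of consistency check Fig. 5 shows. The nonlinearity and the specific $\alpha$, $\theta$, $I_0$ values are placeholders of mine.

```python
import numpy as np
from scipy import integrate, stats

# Placeholders: recurrent input eta ~ gamma(alpha, theta); at a fixed point of the
# quoted rate equation a neuron's rate is phi(I0 - eta), with a threshold-tanh phi.
alpha, theta, I0 = 1.5, 2.0, 1.0
phi = lambda x: np.tanh(np.maximum(x, 0.0))

# (i) mean activity m = <phi(I0 - eta)> as a 1-D integral over the gamma density;
# phi vanishes for eta > I0, so the integral only runs up to I0
m_integral, _ = integrate.quad(
    lambda e: stats.gamma.pdf(e, alpha, scale=theta) * np.tanh(I0 - e), 0.0, I0)

# ... and the same quantity by direct Monte Carlo sampling
rng = np.random.default_rng(0)
eta = rng.gamma(alpha, theta, size=1_000_000)
m_mc = phi(I0 - eta).mean()

# (ii) 'recover' the gamma parameters from samples by moment matching
alpha_hat = eta.mean() ** 2 / eta.var()
theta_hat = eta.var() / eta.mean()

print(f"m (integral) = {m_integral:.4f}   m (Monte Carlo) = {m_mc:.4f}")
print(f"recovered alpha = {alpha_hat:.3f} (true {alpha}), theta = {theta_hat:.3f} (true {theta})")
```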
- Sparse balance in E-I network
- shows that the network properties demonstrated for the inhibitory-only network model also apply to the E/I model, such as sparsity (fraction of neurons active) and firing rates.
- Discussion
- Summarizes the interesting network properties demonstrated in this analysis that result from making the feedforward excitation smaller.
- Broadens the implications to real-world circuits where the E/I ratio is similarly composed.
- Interesting that this phenomenon relating autocorrelation to sparse network responses seems to be a general effect.
- Definitely increased my interest in mean-field theory.
- Here is what I think is really the most fascinating takeaway: “The large degree of variability in the synapses could route stimulus information along particular paths across network neurons”
- A reasonable claim that it is not just feedforward input that influences selective network responses but also recurrent synaptic input.
- The number of recurrent connections in the brain would support this idea. If there are so many recurrent connections, they must be doing something.
- Ideas here build on some early work by M. Jordan cited in The Computational Brain by Terrence Sejnowski on using recurrent networks to identify shapes and the expressive power of non-feedforward connections.
- Granted, I haven’t actually read this 1998 work, just the watered-down example in Dr. Sejnowski’s book.