Noa Malem, BCCN Berlin / TU Berlin
Bayesian methods for network inference with spiking neurons
One of the pivotal challenges in statistical neuroscience is the inference of connectivity between neurons from the observed activity of a network, which can be seen as an inverse problem. In this work we use a statistical generative model to describe the statistics of neural network activity reduced to spike trains, and infer the connectivity under this model. The model used in this work is the kinetic Ising model, which has been shown to accurately capture correlations between neurons in a spiking network. We further extend this model by assuming sparse connectivity in the network. The general problem of connectivity inference in the kinetic Ising model has been approached before with approximation methods such as mean-field analysis and the inversion of the TAP equations. The sparsity assumption has previously been incorporated by adding regularization terms to the optimization problem. Here, we perform the inference in a Bayesian setting, which allows us to encode the sparsity assumption as a spike-and-slab prior distribution over the connections in the network. The main body of this work is the development and assessment of different inference methods. We start by approximating the Ising likelihood with a Probit and a Gaussian model and applying the Expectation Propagation (EP) algorithm. We further use a recent augmentation method based on the Pólya-Gamma distribution, which allows us to derive an exact Gibbs sampler and to apply Expectation Maximization (EM). We first test the different methods on neural activity simulated by the kinetic Ising model with known connectivity; in this setting, we can compare the inferred connectivity with the connectivity used to generate the data. Next, we perform the inference on a realistic simulation of a cortical network and on a dataset recorded from the buccal ganglia of Aplysia, and investigate the ability of the different methods to capture the moments of the data. While all of the methods perform well, the Gibbs sampler is the most accurate, though also the computationally slowest. We conclude by suggesting a combination of the fast Linear EP method with the accurate Gibbs sampler: first, the model hyperparameters are optimized with Linear EP; then, the connectivity is inferred with the Gibbs sampler.
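For orientation, a minimal sketch of the model described above (the exact parametrization and symbols here are assumptions for illustration, not taken from the thesis itself): in the kinetic Ising model each neuron $i$ carries a binary spin $s_i(t) \in \{-1,+1\}$ indicating whether it spiked in time bin $t$; the couplings $J_{ij}$ and biases $h_i$ enter through a logistic transition probability, and sparsity is expressed by a spike-and-slab prior on each coupling:

\[
P\bigl(s_i(t+1) \mid \mathbf{s}(t)\bigr) = \frac{\exp\bigl(s_i(t+1)\, H_i(t)\bigr)}{2\cosh\bigl(H_i(t)\bigr)},
\qquad H_i(t) = h_i + \sum_j J_{ij}\, s_j(t),
\]
\[
p(J_{ij}) = (1-\rho)\,\delta(J_{ij}) + \rho\,\mathcal{N}\bigl(J_{ij};\, 0, \sigma^2\bigr).
\]

Under this sketch, $\rho$ (the prior probability that a connection exists) and $\sigma^2$ (the slab variance) play the role of the hyperparameters that the fast Linear EP step would tune before the Gibbs sampler infers the couplings $J_{ij}$.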
Additional Information
Thesis defence in the international Master program Computational Neuroscience
Organized by
Manfred Opper / Robert Martin