Event Abstract

Bayesian Population Decoding of Spiking Neurons

Sequences of action potentials are believed to be the basis of information transmission in populations of nerve cells. These sequences encode information about sensory stimuli, behavioral variables, or endogenous signals. Many sensory inputs change continuously in time and vary over a wide range of time scales. Action potentials, however, are discrete events in time and depend on the stimulus in a stochastic manner. Most often, simple linear summations over spikes, such as peri-stimulus time histograms or linear filtering schemes, are used as a read-out of spiking neural populations. How information about continuous, time-varying stimuli can be extracted from the temporal structure of the interspike intervals, however, is not well understood.

In this work we present an approximate Bayesian decoding scheme for extracting information about a stimulus, or other continuous-valued inputs, from spike trains. We use this decoding rule to analyse how encoding parameters affect the reconstruction and how they can be adapted to the stimulus statistics in a favorable way.
We describe the noisy generation of spikes in the neural population in response to a continuous stimulus using a probabilistic neuron model. Specifically, we use a leaky integrate-and-fire neuron model driven by a Gaussian process. The neural noise is modeled by a stochastic threshold, which is set to a new random value after each spike. The description of spike generation as a renewal process is included in this model class as a special case in the limit of an infinitely large time constant. In particular, when an exponential distribution is chosen for the threshold noise, the spike generation process becomes an inhomogeneous Poisson process. The reliability of the spike generation process can therefore be adjusted gradually within this model class.
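
To make the encoding model concrete, the following minimal sketch (in Python with NumPy) simulates a leaky integrate-and-fire neuron driven by a sample path of a Gaussian process, with a threshold that is redrawn from an exponential distribution after every spike. All parameter values (GP length scale, membrane time constant, baseline drive) are illustrative assumptions and are not taken from the study.

import numpy as np

rng = np.random.default_rng(0)

# Stimulus: one sample path from a GP with squared-exponential covariance
dt, T = 1e-3, 2.0                          # time step and duration in seconds
t = np.arange(0.0, T, dt)
ell, sigma_s = 0.1, 1.0                    # assumed GP length scale and amplitude
K = sigma_s**2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)
stimulus = rng.multivariate_normal(np.zeros_like(t), K + 1e-9 * np.eye(len(t)))

# Leaky integrate-and-fire dynamics with a stochastic, resampled threshold
tau, v_reset, baseline = 0.02, 0.0, 1.0    # membrane time constant, reset value, baseline drive (assumed)
v = v_reset
theta = rng.exponential(1.0)               # exponential threshold noise (Poisson-like limit)
spike_times = []
for i, s in enumerate(stimulus):
    v += dt / tau * (-v + s + baseline)    # leaky integration of the stimulus
    if v >= theta:                         # threshold crossing produces a spike
        spike_times.append(t[i])
        v = v_reset
        theta = rng.exponential(1.0)       # draw a new random threshold after each spike
print(len(spike_times), "spikes in", T, "seconds")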
Together, this specifies a probabilistic model for both the stimulus ensemble and the neural encoding, which allows us to decode the stimulus from the neural responses in a Bayesian framework. In particular, we seek to compute the posterior probability distribution over the stimulus given an observed sequence of spikes. Calculating this posterior in high dimensions is a challenging problem, for which we apply approximate inference techniques.
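
The full problem requires inferring an entire stimulus trajectory, which is why approximate inference is needed. As a reduced, hypothetical illustration of the Bayesian logic only (not the authors' inference scheme), the sketch below evaluates the posterior over a single constant stimulus value observed through a Poisson spike count on a grid, combining a Gaussian prior with the Poisson likelihood; the encoding nonlinearity and all numbers are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)

x_true = 0.8                               # hypothetical constant stimulus value
T = 2.0                                    # observation window in seconds
rate = lambda x: 20.0 * np.exp(x)          # assumed encoding nonlinearity (firing rate in Hz)
n_spikes = rng.poisson(rate(x_true) * T)   # observed spike count

# Gaussian prior over the stimulus (the GP prior reduced to one dimension)
mu0, sigma0 = 0.0, 1.0
grid = np.linspace(-4.0, 4.0, 2001)
log_prior = -0.5 * ((grid - mu0) / sigma0) ** 2
log_lik = n_spikes * np.log(rate(grid) * T) - rate(grid) * T   # Poisson log-likelihood (constants dropped)
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, grid)               # normalize the posterior on the grid

post_mean = np.trapz(grid * post, grid)
post_var = np.trapz((grid - post_mean) ** 2 * post, grid)
print("posterior mean %.3f, posterior sd %.3f (true value %.1f)" % (post_mean, np.sqrt(post_var), x_true))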
Our results show that the approximate Bayesian decoding algorithm is able to reconstruct a time-varying stimulus. We also investigate how the reconstruction error scales with an increasing number of observations. The Bayesian decoder estimates not only the posterior mean but also the posterior variance, a measure of the residual uncertainty about the stimulus. Such uncertainty information is not available from simple regression decoders. Access to the posterior uncertainty also makes it possible to assess the reliability and robustness of the neural code, and to ask whether encoding parameters are tuned to the statistics of the input in a manner that reduces the residual uncertainty about the stimulus.
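
Continuing the toy example above (again with assumed numbers, not results from the study), pooling more independent observation windows illustrates how the posterior variance, i.e. the residual uncertainty about the stimulus, shrinks as the number of observations grows:

import numpy as np

rng = np.random.default_rng(2)
x_true, T = 0.8, 2.0
rate = lambda x: 20.0 * np.exp(x)          # same assumed encoding nonlinearity as above
grid = np.linspace(-4.0, 4.0, 2001)
log_post = -0.5 * grid ** 2                # start from a standard-normal prior

for k in range(1, 11):                     # pool 1 to 10 independent observation windows
    n = rng.poisson(rate(x_true) * T)
    log_post = log_post + n * np.log(rate(grid) * T) - rate(grid) * T
    post = np.exp(log_post - log_post.max())
    post /= np.trapz(post, grid)
    mean = np.trapz(grid * post, grid)
    var = np.trapz((grid - mean) ** 2 * post, grid)
    print("windows = %2d, posterior sd = %.4f" % (k, np.sqrt(var)))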

Conference: Computational and systems neuroscience 2009, Salt Lake City, UT, United States, 26 Feb - 3 Mar, 2009.

Presentation Type: Poster Presentation

Topic: Poster Presentations

Citation: (2009). Bayesian Population Decoding of Spiking Neurons. Front. Syst. Neurosci. Conference Abstract: Computational and systems neuroscience 2009. doi: 10.3389/conf.neuro.06.2009.03.026


Received: 30 Jan 2009; Published Online: 30 Jan 2009.