Maneesh Sahani, University College London

Whence Bayes? How and why networks learn to perform probabilistic inference

Human and animal behaviour frequently resembles that of an ideal observer performing optimal inference according to the Bayesian calculus of probabilities. How do networks of neurons come to implement these abstract rules so closely? Much neural theory has focused on characterising neurally plausible representations of uncertainty, and on how these may form the basis of computation. Relatively little work has asked how local learning rules might be arranged to achieve such representations and computations. I shall describe new theoretical work that aims to address this gap by considering how computation and inference may be learnt using a distributed distributional code (DDC), in which expected values of fixed nonlinear functions implicitly carry information about uncertainty. This neurally inspired architecture can readily learn to implement inference in simple sensory settings and, remarkably, may even outperform modern machine-learning approaches to unsupervised learning.
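A minimal numerical sketch of the DDC idea, assuming a standard formulation in which a belief p(z) over a latent variable z is encoded by the expectation vector r_i = E_p[psi_i(z)] of a fixed set of nonlinear functions psi_i, and the expectation of any other function of z is then read out approximately linearly from r. The random tanh features and all variable names below are illustrative assumptions, not details from the talk.

import numpy as np

rng = np.random.default_rng(0)

# Fixed nonlinear encoding functions psi_i(z): random tanh features
# (an illustrative choice of basis; any rich fixed basis would do).
n_features = 200
W = rng.normal(size=(n_features, 1))
b = rng.uniform(-2.0, 2.0, size=n_features)

def psi(z):                      # z: (n, 1) -> features: (n, n_features)
    return np.tanh(z @ W.T + b)

# A DDC represents a belief p(z) by the expectations r = E_p[psi(z)];
# here those expectations are estimated by Monte Carlo for illustration.
z_narrow = rng.normal(0.0, 0.3, size=(200_000, 1))   # low-uncertainty belief
z_wide   = rng.normal(0.0, 1.5, size=(200_000, 1))   # high-uncertainty belief
r_narrow = psi(z_narrow).mean(axis=0)
r_wide   = psi(z_wide).mean(axis=0)

# Because r holds expectations of many nonlinear functions, it implicitly
# carries information about uncertainty: E_p[g(z)] for another function g
# is approximated by a linear readout alpha, fit so g(z) ~ psi(z) @ alpha.
z_fit = rng.normal(0.0, 2.0, size=(20_000, 1))
alpha, *_ = np.linalg.lstsq(psi(z_fit), (z_fit ** 2).ravel(), rcond=None)

# The same readout applied to either DDC recovers the second moment of the
# corresponding belief (analytic values: 0.09 and 2.25).
print("E[z^2] under narrow belief:", r_narrow @ alpha)
print("E[z^2] under wide belief:  ", r_wide @ alpha)

The point of the sketch is only that a vector of expected values, paired with suitable linear readouts, acts as a flexible representation of a full belief, including its spread.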

Additional Information

Workshop talk; registration is required

Organized by

GRK 1589
