
Neural networks implementing hierarchical probabilistic models

Principal investigator(s): Sophie Denève, Timm Lochmann, Matthew Chalk and Udo Ernst

A striking analogy exists between the Bayesian networks (graphical models) used in machine learning and recurrent neural networks, which also contain nodes (neurons or neural populations), links (axons and synapses), and multidirectional propagation of messages (spikes in the case of neural networks, beliefs in the case of Bayesian networks). In particular, a parallel can be drawn between the factorization of the joint probability distribution over the nodes of a graphical model, which represents the statistical structure of the perceptual and motor environment, and the modular structure of the brain that implements this graphical model. Our major objective is to establish neurons as building blocks in a new theory of cortical computation, in which neural networks implement an underlying hierarchical statistical model, and in which the multidirectional flow of information within cortical networks is interpreted as a propagation of beliefs that allows each neuron to compute the probability that its hypothesis is true given the evidence received by the entire brain. More generally, we propose to show that networks of biophysical spiking neurons approximate Bayesian inference through a local message-passing algorithm, belief propagation, in a corresponding Bayesian network.
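
To make the message-passing idea concrete, here is a minimal sketch of sum-product belief propagation on a toy chain-structured Bayesian network A -> B -> C with binary variables, in the style of Pearl's pi/lambda messages. The probability tables are illustrative placeholders, not values from the project; the point is only that each node's belief combines causal (top-down) and diagnostic (bottom-up) messages, as the paragraph above describes.

    import numpy as np

    # Prior over A and conditional probability tables P(B|A), P(C|B).
    # These numbers are made up for illustration.
    p_A = np.array([0.7, 0.3])                 # P(A)
    p_B_given_A = np.array([[0.9, 0.1],        # rows: A, cols: B
                            [0.2, 0.8]])
    p_C_given_B = np.array([[0.8, 0.2],        # rows: B, cols: C
                            [0.3, 0.7]])

    # Evidence: C is observed to be 1.
    evidence_C = np.array([0.0, 1.0])

    # Forward (causal) message from A to B: pi(B) = sum_a P(B|a) P(a).
    pi_B = p_A @ p_B_given_A

    # Backward (diagnostic) message from C to B:
    # lambda(B) = sum_c P(c|B) * evidence(c).
    lambda_B = p_C_given_B @ evidence_C

    # The belief at B multiplies the messages arriving from both
    # directions and normalizes, yielding P(B | all evidence).
    belief_B = pi_B * lambda_B
    belief_B /= belief_B.sum()
    print("P(B | C=1) =", belief_B)

On a tree-structured network this local scheme computes exact marginals; the project's hypothesis is that cortical circuits approximate an analogous bidirectional message flow with spikes.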

Publications

Lochmann, T. and Deneve, S., Neural processing as causal inference, Current Opinion in Neurobiology, 21(5), 774-81 (2011).

Lochmann, T. and Deneve, S., Optimal cue combination predicts contextual effects on sensory neural responses, Sensory Cue Integration (2011).
