
Special Seminar, 21 July 2015

Tuesday, 21 July 2015
"New Ideas in Theoretical Neuroscience"

 

The symposium will take place at Salle Langevin, 29 rue d'Ulm, starting at 14:00.

 14:00 - 14:45  'Reading the mind of the worm'
                 Saul Kato

 14:45 - 15:30  'Sequence generating recurrent neural
                 networks: a novel view of cortex, thalamus,
                 and the basal ganglia'
                 Sean Escola

 15:30 - 15:45   Coffee break

 15:45 - 16:30  'Contextual modulation of gamma rhythms in
                 inhibition stabilized cortical networks'
                 Yashar Ahmadian


 Reading the mind of the worm
 Saul Kato
 EMBO Fellow, Zimmer Group, IMP, Vienna

 If we could read the activity of every neuron of an
 organism’s brain at the same time, could we decipher its
 thoughts? More tangibly, we would be in a position to ask
 how holistic behavior arises from dynamic brain activity.
 While individual motor actions have been correlated with
 dynamical activities of neuronal sub-networks, it remains
 unknown how the brain coordinates these activities to
 produce coherent action sequences. Here, we perform
 brain-wide single-cell resolution imaging in Caenorhabditis
 elegans and find that the neural population exhibits a
 widely shared, low-dimensional signal that evolves
 cyclically on an attractor-like manifold, the geometry of
 which defines the assembly of commands into action
 sequences, including decisions between alternative actions.
 This study establishes, for the first time in any animal, a
 continuous real-time mapping between neural dynamics and
 behavioral dynamics on a single-trial basis.
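
 As a rough illustration of the kind of analysis this
 implies, the sketch below applies PCA to a synthetic
 (neurons x time) matrix in which many cells share one
 cyclic latent signal; the data and variable names are
 purely illustrative and are not taken from the study.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a (neurons x time) calcium-imaging
# matrix: many neurons share a single slow, cyclic latent
# signal, plus independent noise.
n_neurons, n_t = 100, 2000
t = np.linspace(0, 10 * 2 * np.pi, n_t)
latent = np.vstack([np.cos(t), np.sin(t)])      # 2-D cyclic trajectory
loadings = rng.normal(size=(n_neurons, 2))      # per-cell weights on it
X = loadings @ latent + 0.3 * rng.normal(size=(n_neurons, n_t))

# PCA via SVD of the mean-centred data.
Xc = X - X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)
pcs = Vt[:2]        # population trajectory in the top-2 PC plane

print("variance explained by first 2 PCs:", explained[:2].sum())
# Plotting pcs[0] against pcs[1] traces out the closed,
# attractor-like loop on which the shared signal evolves.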



 Sequence generating recurrent neural networks: a novel view
 of cortex, thalamus, and the basal ganglia

 Sean Escola
 Columbia University, New York

 One view of all complex cognitions and behaviors is that
 they are sequences of simpler, more stereotyped components.
 For example, peeling a banana involves combining a set of
 simple reaches and grasps. In this view, sequence generation
 is a fundamental computational task that the brain must
 perform. Additionally, the same neural hardware may be
 reused for each component of a sequence: evidence from
 primary motor cortex reveals that the same network of
 neurons can drive multiple different kinds of reaches. If we
 map each component of a sequence onto an activity "state" of
 a neural network, then sequence generation poses the
 following questions: 1) how can networks maintain the
 current state, 2) how can they switch between states at
 appropriate times, and 3) can the computational functions
 underlying sequence generation be mapped onto specific brain
 structures? In this talk, I will present a multistate
 recurrent neural network (RNN) model that can perform the
 computation of sequence generation and that addresses each
 of these three questions. The model architecture maps nicely
 onto the cortex-basal ganglia-thalamus loop, thus offering
 novel insights into the functions of these brain structures,
 which have long been known to play a role in sequencing.
 Interestingly, this specific architecture has recently been
 proposed in the machine learning community as the
 "multiplicative RNN", and has been shown to have certain
 advantages over standard RNNs, which will be discussed. Thus,
 it may be proposed that the brain implements a multiplicative
 RNN as a way to harness these computational advantages.
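
 As a rough sketch of the multiplicative-RNN idea mentioned
 above (in the spirit of its factored formulation in the
 machine learning literature), the toy update below lets the
 input gate the effective recurrent transformation of the
 hidden state; all dimensions, weights, and names are
 illustrative assumptions rather than the speaker's model.

import numpy as np

rng = np.random.default_rng(1)
n_h, n_x, n_f = 64, 10, 64    # hidden, input, factor sizes (arbitrary)

W_fx = rng.normal(scale=0.1, size=(n_f, n_x))  # input  -> factors
W_fh = rng.normal(scale=0.1, size=(n_f, n_h))  # hidden -> factors
W_hf = rng.normal(scale=0.1, size=(n_h, n_f))  # factors -> hidden
W_hx = rng.normal(scale=0.1, size=(n_h, n_x))  # direct input drive

def mrnn_step(h, x):
    """One update of a factored multiplicative RNN: the input
    selects, through the factors, which effective recurrent
    matrix is applied, so the same hardware can implement a
    different transition for each component of a sequence."""
    f = (W_fx @ x) * (W_fh @ h)        # input-gated factors
    return np.tanh(W_hf @ f + W_hx @ x)

h = np.zeros(n_h)
for x in rng.normal(size=(5, n_x)):    # a short toy input sequence
    h = mrnn_step(h, x)
print(h[:5])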



 Contextual modulation of gamma rhythms in inhibition
 stabilized cortical networks

 Yashar Ahmadian
 University of Oregon, Eugene; Columbia University, New York

 Cortical networks feature strong recurrent excitation,
 poising them near potential instability. By and large, models
 of cortical dynamics have relied on single-neuron saturation
 to overcome such instability. However, throughout
 the cortical dynamic range, neurons' activity tends to
 remain well below their saturation levels, and
 correspondingly their empirically measured input-output
 functions remain convex and supralinear. Such expansive
 nonlinearities at first appear to aggravate the problem of
 stability. Nevertheless, we have recently shown that strong
 recurrent inhibition is sufficient to stabilize cortical
 networks against runaway excitation, without relying on
 single-neuron saturation (Ahmadian et al. 2013, Rubin et
 al. 2015). Moreover, as a consequence, such Stabilized
 Supralinear Networks (SSN) provide a robust and parsimonious
 mechanistic explanation for a plethora of contextual
 modulation phenomena observed across sensory cortical areas.
 These include surround suppression and divisive
 normalization, recently dubbed a canonical brain computation
 (Carandini & Heeger, 2011).
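
 A minimal numerical sketch of this mechanism, for a
 two-population (E, I) rate circuit with an expansive
 power-law transfer function, is given below; the parameter
 values are illustrative choices, not those of the cited
 papers.

import numpy as np

# Rate dynamics: tau_a dr_a/dt = -r_a + k * [ (W r)_a + h_a ]_+ ** n
# The transfer function is supralinear (expansive), with no
# saturation; stability has to come from recurrent inhibition.
k, n = 0.04, 2.0                       # power-law gain and exponent
tau = np.array([0.020, 0.010])         # E and I time constants (s)
W_full = np.array([[2.5, -1.3],        # E<-E, E<-I
                   [2.4, -1.0]])       # I<-E, I<-I
W_no_inh = W_full.copy()
W_no_inh[:, 1] = 0.0                   # same circuit, inhibition removed

def simulate(W, h, T=0.5, dt=1e-4):
    """Euler-integrate the rate equations for a constant input h."""
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        drive = W @ r + h
        r = r + dt / tau * (-r + k * np.maximum(drive, 0.0) ** n)
        if r.max() > 1e3:              # runaway excitation
            return None
    return r

h = np.array([4.0, 4.0])
print("with inhibition:   ", simulate(W_full, h))   # finite rates
print("without inhibition:", simulate(W_no_inh, h)) # None: blows up
# With inhibition intact the rates settle to a stable fixed
# point; with it removed, the expansive excitatory subnetwork
# has no fixed point for this input and activity runs away.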

 In this talk I will first review these published results.
 In the second half, I will focus on ongoing work using the
 SSN to model aspects of time-dependent cortical dynamics.
 Gamma rhythms are a robust feature of cortical dynamics, and
 have been hypothesized to play a central role in various
 cognitive tasks. Gamma-rhythm characteristics such as power
 and peak frequency, however, exhibit strong dependencies
 on stimulus and contextual parameters (it has in turn been
 argued that such dependencies may invalidate some
 hypothesized computational functions for gamma). I will
 describe how the SSN robustly accounts for such modulations
 in gamma characteristics. In particular, I will
 show how the model explains the particular dependence of
 gamma peak frequency on local stimulus contrast and stimulus
 size observed in the visual cortex. Time allowing, I will
 also elaborate on two possible mechanisms for attentional
 modulation of rates in the SSN, which lead to opposite effects
 on gamma power, as observed, respectively, in V1 vs. higher
 visual cortical areas.
