
Past Events

Date | Speaker | Affiliation | Host | Title / Abstract


6 Feb 2018 Victoria Booth University of Michigan Gregory Dumont
Cholinergic modulation of pattern formation in excitatory-inhibitory neural networks
Abstract: The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioral state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity. At the cellular level, ACh significantly alters neural excitability and firing properties as measured by the phase response curve (PRC), in a manner that has been shown to alter the propensity for network synchronization. In this talk, I'll discuss our investigations into the interaction of cellular ACh modulation and network connectivity structure in excitatory and inhibitory neural networks. Our results show how this interaction shapes spatio-temporal network activity patterns, and suggest potential functional effects of modulating these patterns.
21 Nov 2017 Zachary Kilpatrick University of Colorado  Ines Guerreiro
Evidence accumulation in changing environments: The price of optimality
Abstract: To make decisions in a constantly changing world, organisms must account for environmental volatility and discount old information when making decisions based on such accumulated evidence. We introduce Bayesian inference models of decision making, and derive an ideal observer model for inferring the present state of the environment along with the environment's rate of change. Such models can be derived when the evidence stream is persistent and noisy (e.g., random dot displays) or when evidence is pulsatile (e.g., clicks provided to the ears). Moment closure allows us to obtain a low-dimensional system that performs comparable inference. These computations can be implemented by a neural network model whose connections are updated according to an activity-dependent plasticity rule. We discuss the predictions of our model in light of recent experimental data exploring evidence accumulation strategies implemented by humans and rats performing decision-making tasks in changing environments. The model can be extended in a number of ways to incorporate multiple streams of evidence, such as change point signals.
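The core of such an ideal observer fits in a few lines. Below is an illustrative sketch, not the speaker's code: it assumes a symmetric two-state environment with a known hazard rate h and Gaussian observations, and discounts old evidence through a hazard-rate prediction step before each likelihood update.

```python
import numpy as np

rng = np.random.default_rng(0)

h = 0.05               # hazard rate: probability the state flips per time step
mu, sigma = 1.0, 2.0   # observations are Gaussian around +/- mu

def simulate(T=2000):
    """Generate a two-state telegraph process and noisy observations of it."""
    s = np.empty(T)
    s[0] = 1.0
    for t in range(1, T):
        s[t] = -s[t - 1] if rng.random() < h else s[t - 1]
    x = s * mu + sigma * rng.normal(size=T)
    return s, x

def ideal_observer(x):
    """Posterior log-odds of state +1, discounting old evidence via h."""
    L = np.empty(len(x))
    logodds = 0.0
    for t, xt in enumerate(x):
        # prediction step: the hazard rate mixes the two state probabilities,
        # which bounds the accumulated evidence (old information is discounted)
        prob = 1.0 / (1.0 + np.exp(-logodds))
        prob = (1 - h) * prob + h * (1 - prob)
        logodds = np.log(prob / (1 - prob))
        # update step: add the log-likelihood ratio of the new observation
        logodds += 2 * mu * xt / sigma**2
        L[t] = logodds
    return L

s, x = simulate()
L = ideal_observer(x)
acc = np.mean(np.sign(L) == s)   # fraction of time the belief matches the state
```

With these (hypothetical) parameters the hazard-rate prediction step caps the accumulated log-odds near log((1-h)/h), which is exactly the discounting of old information that a changing environment demands.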
20 Oct 2017 Mehrdad Jazayeri MIT Ivan Gordeli
Regulation of cortical dynamics by internal models.
Theoretical considerations and psychophysical studies of sensorimotor integration describe the dynamic regulation of behavior in terms of three computational building blocks: a controller (i.e., inverse model), a predictor (i.e., forward model) and a state estimator (i.e., Bayesian estimator). However, due to the complexity and concurrent engagement of these computations during natural movements, direct evidence that the nervous system establishes inverse and forward models remains elusive. We tackled this problem by designing a sensorimotor timing task in which the function of the inverse and forward models was simple and their deployment was segregated in time. Recording from the frontal cortex of monkeys performing this task revealed the differential contribution of the inverse and forward models to the regulation of the underlying neural dynamics. Our findings provide direct evidence that the nervous system establishes task-relevant internal models to perform Bayesian sensorimotor integration.
3 Oct 2017 Frances Chance Sandia National Laboratories  Mirjana Maras
Can the sum of the parts be greater than the whole?  Neural-inspired computation through study of canonical circuits.
Abstract: Neuroscience is entering an era of big data that will greatly advance our understanding of neural circuitry and what neural computations underlie brain function and behavior.  At the same time, neural-inspired computing is also experiencing a renaissance, but it has not fully exploited the potential of neuroscience big data.  My research at Sandia has focused on studying “canonical” circuits of the nervous system, for example motion-sensitivity in the retina, and exploring how furthering our understanding of these neural circuits can lead to new advances in engineered systems.
20 Jun 2017 Carmen Canavier Louisiana State University Pantelis Leptourgos
Pacemaking and Bursting in Midbrain Dopamine Neurons
Midbrain dopamine neurons are implicated in many disorders of dopamine signaling, including addiction, schizophrenia and Parkinson’s disease. In vivo, they exhibit two primary activity patterns: tonic (single-spike) firing and phasic bursting. The spontaneous tonic firing of these neurons plays a fundamental role in dopaminergic signaling by setting the basal level of dopaminergic tone in the striatum and setting the gain for phasic reward signaling. Visualization of the 3D structure of the axon initial segment (AIS) and the somatodendritic domain of mouse dopaminergic neurons, which were previously recorded in vivo, revealed a positive correlation of the firing rate with both proximity and size of the AIS. Computational modeling showed that the size of the AIS is the major causal determinant of the tonic firing rate, by virtue of the higher intrinsic frequency of the isolated AIS, whereas position correlates with firing rate only due to a correlation between size and position. Thus morphology plays a critical role in setting the basal tonic firing rate. Transitions from tonic to phasic signaling (bursting) are modulated by two potassium currents, the small-conductance (SK) calcium-activated potassium current and the ATP-mediated potassium (K-ATP) current, which have opposite effects on bursting. Blocking the SK current de-regularizes firing and increases bursting, whereas silencing K-ATP channels greatly reduces burst firing in a medial subpopulation of these neurons. This result seems paradoxical in light of the putative indirect Ca2+ dependence of the K-ATP channel: why would two calcium-dependent potassium currents have diametrically opposed effects on bursting? To address this question, we used a computational model to show that the faster time scale of the SK current prevents plateaus, whereas the presumed slower time scale of the K-ATP current enables the pauses between bursts.
6 Jun 2017 Avrama Blackwell George Mason University Marie Rooy
Calcium control of synaptic plasticity in striatal spiny projection neurons
1 Jun 2017 Michale Fee MIT Rupesh Kumar
Mechanisms underlying the developmental emergence of a complex learned motor program
Bird song is a complex behavior composed of multiple syllable types. In adult birds, each syllable is encoded by a distinct sequence of bursts in the premotor cortical area HVC. To understand how these sequences in HVC develop during vocal learning, we have recorded HVC neurons in juvenile zebra finches throughout vocal development. Early in vocal learning, neurons exhibited rhythmic (5-10 Hz) activity, with different neurons active at different phases of the rhythm. As new syllables emerged, some neurons were shared across emerging syllable types, and the fraction of these shared neurons decreased over development. These results suggest that syllable sequences emerge from a rhythmic prototype motor sequence.
18 May 2017 Massimo Scanziani University of California Lyudmila Kushnir
Cortical Circuits of Vision
The diversity of neuron types and synaptic connectivity patterns in the cerebral cortex is astonishing. How this cellular and synaptic diversity contributes to cortical function is just beginning to emerge. Using the mouse visual system as an experimental model, I will discuss the mechanisms by which excitatory and inhibitory interactions between distinct neuron types contribute to the most basic operations performed by visual cortex. I will highlight how the functional and structural analysis of cortical circuits allows us to bridge the gap between system and cellular neuroscience.
16 May 2017 Florentin Wörgötter University of Göttingen Giulio Bondanelli
The interaction of plasticity processes across time scales and their role in memory dynamics
Many experiments provide evidence that, after learning, human and animal memories remain dynamic and changeable. Among others, one intriguing and counterintuitive effect is the destabilization of memories by recalling them. In addition, some of these destabilized memories can be ‘rescued’ by sleep-induced consolidation while others cannot. Up to now, the basic principles underlying these effects are largely unknown. In this talk I will present our theoretical model in which the interaction between the biologically well-established processes of synaptic plasticity and synaptic scaling enables the formation of memories, or rather Hebbian cell assemblies, in neural networks. Remarkably, the dynamics of these cell assemblies are comparable to the intriguing dynamics of human and animal memories described above. Furthermore, I will present our recent results on the use of cell assemblies to solve nonlinear tasks, for instance learning and performing complex motor movements. Thus, this theoretical work serves as a further step toward linking biological processes at the neuronal scale to behavior.
9 May 2017 Alex Roxin CRM Barcelona Ivan Gordeli
A model of plasticity-dependent network activity in rodent hippocampus during exploration of novel environments
4 May 2017 Robert Gutig
Max Planck Institute of Experimental Medicine, Goettingen
Alireza Alemi
Spiking neurons can discover predictive features by aggregate-label learning

The brain routinely discovers sensory clues that predict opportunities or dangers. However, it is unclear how neural learning processes can bridge the typically long delays between sensory clues and behavioral outcomes. Here, I introduce a learning concept, aggregate-label learning, that enables biologically plausible model neurons to solve this temporal credit assignment problem. Aggregate-label learning matches a neuron’s number of output spikes to a feedback signal that is proportional to the number of clues but carries no information about their timing. Aggregate-label learning outperforms stochastic reinforcement learning at identifying predictive clues and is able to solve unsegmented speech-recognition tasks. Furthermore, it allows unsupervised neural networks to discover reoccurring constellations of sensory features even when they are widely dispersed across space and time.

13 Dec. 2016 Carlos Brody Princeton University Rupesh Kumar

Flexible sensorimotor routing in the rat

We trained rats in a task in which two cued sensorimotor associations could be rapidly reversed, from one trial to the next (Duan et al., Neuron 2015). I will describe behavioral, electrophysiological, optogenetic, and modeling data which suggest that the superior colliculus plays a surprisingly cognitive role in enabling the top-down executive control required to perform sensorimotor reversals in the task.

4 Oct. 2016 Anne Churchland Cold Spring Harbor Laboratory Francesca Mastrogiuseppe
A multisensory approach for understanding decision circuits
Despite numerous experiments on perceptual decision-making, fundamental questions about the underlying neural circuits remain unanswered. Specifically, little is known about circuits that weigh
different sources of information and integrate them to guide decisions. Some individual neurons have been associated with these computations, but most of our knowledge comes from decisions about a single, isolated sensory modality. Further, most decision studies are carried out in non-human primates, providing limited access to powerful tools for neural circuit dissection compared to rodents. As a result, little is known about how unisensory and multisensory information is transformed across decision microcircuits spanning multiple areas. Almost nothing is understood about local microcircuits and how they support within-area computations that are fundamental to decision-making. My lab aims to define the neural circuits that allow animals to integrate evidence across time and sensory modalities to guide decisions. To achieve this, we join two previously separate fields, decision-making and multisensory integration, and bring this combined approach to rodents. Our current focus has been on neurons in the posterior parietal cortex (PPC). PPC receives diverse inputs and is involved in a dizzying array of behaviors. These many behaviors could rely on distinct categories of neurons specialized to represent particular variables or could rely on a single population of PPC neurons that is leveraged in different ways. To distinguish these possibilities, we evaluated rat PPC neurons recorded during multisensory decisions. Newly designed tests revealed that task parameters and temporal response features were distributed randomly across neurons, without evidence of categories. This suggests that PPC neurons constitute a dynamic network that is decoded according to the animal's present needs. To test for an additional signature of a dynamic network, we compared moments when behavioral demands differed: decision and movement. Our new state-space analysis revealed that the network explored different dimensions during decision and movement.
These observations suggest that a single network of neurons can support the evolving behavioral demands of decision-making.
7 Jun. 2016 Omri Barak Technion University Mirjana Maras
Understanding trained recurrent neural networks
Recurrent neural networks are an important class of models for explaining neural computations. Recently, there has been progress both in training these networks to perform various tasks, and in relating their activity to that recorded in the brain. Despite this progress, there are many fundamental gaps towards a theory of these networks. Neither the conditions for successful learning, nor the dynamics of trained networks are fully understood. I will present the rationale for using such networks for neuroscience research, and a detailed analysis of very simple tasks as an approach to build a theory of general trained recurrent neural networks.
1 Dec. 2015 Adam Kepecs Cold Spring Harbor Laboratory Marie Rooy
How to spot confidence in the brain
Decision confidence is a forecast about the correctness of one’s decision. It is often regarded as a higher-order function of the brain, requiring a capacity for metacognition that may be unique to humans. If confidence manifests itself to us as a feeling, how can one then identify it amongst the brain’s electrical signals in an animal? We tackle this issue by using mathematical models to gain traction on the problem of confidence, allowing us to identify neural correlates and mechanisms. I will present a normative statistical theory that enables us to establish that human self-reports of confidence are based on a computation of statistical confidence. Next, I will discuss computational algorithms that can be used to estimate confidence and decision tasks that we developed to behaviorally read out this estimate in humans and rats. Finally, I will discuss the neural basis of decision confidence and specifically the role of the orbitofrontal cortex.
29 Oct. 2015 Jakob Macke
Max Planck Institute for Biological Cybernetics, Tübingen  
David Schultz
Correlations and signatures of criticality in neural population models
Large-scale recording methods make it possible to measure the statistics of neural population activity and to gain insights into the principles that govern the collective activity of neural ensembles. One hypothesis that has emerged from this approach is that neural populations are poised at a thermodynamic critical point. Support for this notion has come from a recent series of studies which identified signatures of criticality (such as a divergence of the specific heat with population size) in the statistics of neural activity recorded from populations of retinal ganglion cells, and hypothesized that the retina might be optimised to operate at this critical point.
What mechanisms can explain these observations? Do they require the neural system to be fine-tuned to be poised at the critical point, or do they robustly emerge in generic circuits? How are signatures of criticality related to the structure of correlations within the neural population? We show that these effects arise in a simple simulation of retinal population activity. They appear robustly across a range of parameters, including biologically implausible ones, and can be understood analytically in a simple model. The specific heat diverges linearly with population size n whenever the (average) correlation is independent of n; in particular, this is generally true when subsampling a large, correlated population. These observations pose the question of whether signatures of criticality are indicative of an optimised coding strategy, or whether they arise as a byproduct of sub-sampling a neural population with correlations.
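The subsampling point is easy to reproduce in a toy model (my construction, not the paper's retinal simulation): let every pattern draw a shared firing probability from two values, so that neurons are conditionally independent but the average pairwise correlation does not shrink with population size. The estimated specific heat then grows roughly linearly with n.

```python
import numpy as np

rng = np.random.default_rng(1)
rates = np.array([0.1, 0.5])   # the shared firing probability fluctuates

def specific_heat(n, samples=20000):
    """Estimate c(n) = Var[-log P(x)] / n for an n-neuron population whose
    spikes are conditionally independent given a shared, fluctuating rate."""
    p = rng.choice(rates, size=samples)      # one shared rate per pattern
    k = rng.binomial(n, p)                   # spike count of each pattern
    # exact log-probability of each observed pattern under the mixture
    def logpat(pi):
        return k * np.log(pi) + (n - k) * np.log(1 - pi)
    logP = np.logaddexp(logpat(rates[0]), logpat(rates[1])) - np.log(2)
    return np.var(-logP) / n

c20, c80 = specific_heat(20), specific_heat(80)   # grows roughly with n
```

Fixing the shared rate to a single value removes the n-independent correlation, and the energy variance per neuron stays flat instead of diverging, which is the contrast the abstract describes.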
22 Oct. 2015 Gašper Tkačik IST Vienna David Schultz Recent progress on reading the retinal code
6 Oct. 2015 Nathaniel Daw Princeton Francesca Mastrogiuseppe
Compulsion and the mechanisms of model-based reinforcement learning
Decisions and neural correlates of decision variables both indicate that humans and animals make decisions taking into account task structure. Such deliberative, "model-based" choice is thought to be important for overcoming habits and various sorts of compulsions, but there is still little evidence about the algorithmic or neural mechanisms that support it. I discuss recent studies attempting to address these questions. First, although it is widely envisioned that such model-based choices are supported by prospective computations at decision time, there are also indications that such behaviors may instead be produced by various sorts of precomputations. I present fMRI data from a sequential decision task in which states are tagged with decodable stimulus categories, which demonstrate a correspondence between predictive neural activity and other behavioral and neural signatures of model-based and model-free learning. This supports the widespread supposition that these behaviors are indeed supported by prospection. Second, I present some early and ongoing studies examining to what extent decisions are informed by representations of individual episodes, vs. statistics aggregated over multiple experiences as learned by typical algorithms, both model-based and model-free. Memory for episodes could support distinct computational approaches to the decision problem, including Monte Carlo and kernel methods, and also might support some apparently model-based behaviors. Finally, I discuss recent efforts to test the hypothesis that disorders of compulsion are related to deficient model-based learning. Although several patient studies appear to support this idea, there seems to be a lack of specificity in this effect, as patients with apparently non-compulsive disorders show similar deficits. We suspect this relates to a much more general problem in psychiatry: the comorbidity and the poor specificity of psychiatric diagnoses.
I present results from a large-scale online study of variation in psychiatric symptoms in a healthy population, which suggests a way to cope with these issues.
24 Jun. 2015 Adrienne Fairhall University of Washington Gabrielle Gutierrez Learning, variability and synchrony in birdsong
21 Jun. 2015 Saul Kato, Sean Escola, Yashar Ahmadian Special Seminar
23 Jun. 2015 Jonathan Pillow Princeton Flora Bouchacourt
Unlocking single-trial dynamics in parietal cortex during decision-making
Neurons in the macaque lateral intraparietal (LIP) cortex exhibit "ramping" firing rates, a phenomenon that is commonly believed to reflect the accumulation of sensory evidence during decision-making. However, ramping that appears in trial-averaged responses does not necessarily imply spike rate ramps on single trials; a ramping average could also arise from instantaneous steps that occur at different times on different trials. In this talk, I will describe an approach to this problem based on explicit statistical latent-dynamical models of spike trains. We analyzed LIP spike responses using spike train models with: (1) ramping "accumulation-to-bound" dynamics; and (2) discrete "stepping" or "switching" dynamics. Surprisingly, we found that roughly three quarters of choice-selective neurons in LIP are better explained by a model with stepping dynamics. We show that the stepping model provides an accurate description of LIP spike trains, allows for accurate decoding of decisions, and reveals latent structure that is hidden by conventional stimulus-aligned analyses.
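The central ambiguity, trial-averaged ramps arising from single-trial steps, can be demonstrated in a few lines. A hypothetical sketch with made-up rates and step-time distribution, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)
r_lo, r_hi = 5.0, 40.0    # firing rate (Hz) before and after the step
dt, trials = 0.01, 500

t = np.arange(0.0, 1.0, dt)
step_times = rng.uniform(0.1, 0.9, size=trials)   # a random step time per trial

# every single trial is a pure step from r_lo to r_hi ...
rates = np.where(t[None, :] < step_times[:, None], r_lo, r_hi)

# ... yet the trial-averaged response ramps smoothly
psth = rates.mean(axis=0)
```

Because the average at time t is just r_lo plus the step size times the fraction of trials that have already stepped, the PSTH traces out the step-time distribution; distinguishing ramps from steps therefore requires single-trial model comparison of the kind described above.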
19 May 2015 Peter Dayan Gatsby, UCL David Schultz
Heuristics of Control: Habitization, Fragmentation, Memoization and Pruning
Goal-directed or model-based control faces substantial computational challenges in deep planning problems. One popular solution to this is habitual or model-free control based on state-action values. In a simple planning task, we found evidence for three further heuristics: blinkered pruning of the decision-tree, the storage and direct reproduction of sequences of actions arising from search, and hierarchical decomposition. I will discuss models of these and their interactions. This is work with Quentin Huys in collaboration with Niall Lally, Jon Roiser and Sam Gershman. 
27 Jan. 2015 Maneesh Sahani Gatsby, UCL Ralph Bourdoukan

Sensory dynamics and the visual environment

An oft-feted success of theoretical neuroscience lies in the uncovering of parallels between the way sensory systems represent and organise stimuli and the statistical properties of the natural environment. However, the greatest such successes have come with static stimuli, or else by treating the axis of time as essentially equivalent to space. In fact, sensory systems must learn to process temporally evolving stimuli through their own co-evolving dynamics. I shall describe two studies, one based on dynamical generative models and the other on dynamical recognition modelling, that seek to understand the dynamics of sensory representations, and thus to understand how properties of early visual representations reflect the transitory visual world.

16 Oct. 2014 Thomas Akam Champalimaud Centre for the Unknown, Lisbon Matthew Chalk

Oscillatory multiplexing of population codes for selective communication in neural circuits

Mammalian brains exhibit spatio-temporal patterns of network oscillation, the structure of which varies with behaviour. A longstanding hypothesis holds that changes in the structure of network
oscillations play a causal role in controlling effective connectivity between brain regions, though concrete evidence for or against this hypothesis remains elusive. One challenge in evaluating whether observed oscillatory activity is consistent with this proposed function is the lack of a clear quantitative picture about how such selective communication might work. We have approached this question from a neural coding perspective; asking what coding schemes and readout algorithms may support selective oscillatory communication. We argue that selective communication necessarily requires multiplexed coding; i.e. coding schemes in which a single spatio-temporal pattern of spike activity carries multiple independently accessible information channels. We propose that multiplexing is achieved through multiplicative modulation of firing rate population codes. In this coding scheme, variables are encoded into the spatial pattern of average firing rate over the oscillation cycle, with multiplicative oscillatory modulation used to create separate communication channels differentiated by the frequency or phase of modulation. We have identified readout mechanisms which allow a network to selectively respond to inputs with a specific modulation while ignoring distracting inputs; in principle allowing changes in oscillatory activity to control information flow. I will present work with spiking network simulations and simplified models which illustrate these ideas and identify constraints on those structures of oscillatory activity which can efficiently support selective oscillatory communication.
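A deliberately stripped-down, rate-based sketch of this multiplexing scheme (noiseless, with parameters invented for illustration): two variables are multiplicatively modulated at different frequencies on a single summed rate signal, and a frequency-matched readout recovers each channel while ignoring the other.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
f1, f2 = 8.0, 13.0   # modulation frequencies tagging the two channels
a, b = 3.0, 7.0      # the two encoded variables (mean firing rates)

# one summed population rate, with each variable multiplicatively
# modulated at its own frequency
x = a * (1 + np.cos(2 * np.pi * f1 * t)) + b * (1 + np.cos(2 * np.pi * f2 * t))

def readout(x, f):
    """Frequency-matched readout: correlating with the carrier at f recovers
    that channel's amplitude, while the other channel averages out."""
    return 2 * np.mean(x * np.cos(2 * np.pi * f * t))

a_hat, b_hat = readout(x, f1), readout(x, f2)
```

Over an integer number of modulation cycles the cross terms average to zero, so a_hat and b_hat recover a and b from the single multiplexed signal; this is the sense in which a readout tuned to a modulation frequency or phase can select one communication channel among several.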

09 Sept. 2014 Tim Vogels University of Oxford, UK Veronika Koren The dance of excitation and inhibition - dynamics in balanced networks
8 July 2014 Surya Ganguli Department of Applied Physics, Stanford University, USA Dani Martí The functional contribution of synaptic complexity to learning and memory

An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises the fundamental question, how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Moreover, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function.

We also apply our theories to model the time course of learning gain changes in the rodent vestibular oculomotor reflex, both in wildtype mice, and knockout mice in which cerebellar long term depression is enhanced; our results indicate that synaptic complexity is necessary to explain diverse behavioral learning curves arising from the interactions of prior experience and enhanced LTD.

12 June 2014 Brent Doiron Theoretical Neuroscience Research Group, University of Pittsburgh, USA Dani Martí Breaking balance: how network architecture impacts spiking dynamics

Networks with strong recurrent excitation and inhibition that are roughly balanced to one another provide a model of cortex that accounts for a wide variety of observed dynamics. However, a formal adherence to balanced conditions often precludes the possibility for interesting macroscopic network dynamics. I will review recent work that discusses how targeted deviations from a balanced state allow interesting macroscale dynamics, which capture rich cortical activity reported in experiments. One deviation involves assembly structure within excitatory networks that promotes metastable population dynamics, capturing the reported distinction between spontaneous and evoked spiking dynamics. These assemblies can be naturally embedded using a combination of spike timing dependent plasticity and homeostatic synaptic regulation. A second deviation involves the extension of balanced networks to include a spatial dimension in cortical networks. When input correlations are spatially broad, then appreciable, yet moderate, noise correlations occur naturally and can match observed correlation distributions in a variety of cortical networks. The combination of these results shows how structured architecture in balanced networks gives rise to important macroscopic cortical dynamics, yet nevertheless continues to provide the microscopic variability and rough asynchrony that are prominent features of cortical spiking responses.

17 March 2014

Eric Shea-Brown University of Washington, Seattle, USA Dani Martí

Assembling coherence in neural populations

Experimental breakthroughs are yielding an unprecedented view of the brain's connectivity and of its coherent dynamics ---and a major challenge is to understand how the former leads to the latter. In our approach, we use graphical and point process methods to isolate the contribution of successively more-complex network features to coherent spiking. Next, we show how network features can be efficiently combined, yielding a set of low-order graph statistics we name "motif cumulants." These can be sampled experimentally, and appear to contain the necessary information to predict overall levels of coherence in a neural population. We close by asking what features of this coherence matter most --and least-- for the neural "coding" of information. This is joint work with Yu Hu, James Trousdale, Kresimir Josic, and Joel Zylberberg.


13 March 2014 Yashar Ahmadian Center for Theoretical Neuroscience, Columbia University, New York, USA Dani Martí Stability and computation in cortical circuits

Can cortical circuits self-organize into a stable asynchronous state despite massive amounts of recurrent excitation, without relying on single-neuronal saturation? I will show that strong and fast recurrent inhibition is sufficient to dynamically stabilize networks with strong recurrent excitation and an expansive rectified power-law nonlinearity.

I will then explore the consequences of such stabilization, and show how it accounts for various aspects of a wide range of contextual modulation effects, such as surround suppression and divisive normalization, a ubiquitous and canonical brain computation. Time allowing, I will also discuss some of the transient and time-dependent properties of such networks, in particular how they can account for contextual influences on the characteristics of gamma rhythms in the visual cortex.
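A minimal two-population sketch of such a stabilized supralinear network (the weights, gain, and input here are illustrative choices, not the talk's parameters): each rate follows a rectified power-law of its input, and fast, strong recurrent inhibition holds the strong recurrent excitation at a finite fixed point.

```python
import numpy as np

# two-population rate model with a rectified power-law ("supralinear") gain
k, p = 0.04, 2.0                  # r = k * [input]_+^p
W = np.array([[2.5, -1.3],        # E <- E,  E <- I
              [2.4, -1.0]])       # I <- E,  I <- I
tau = np.array([0.020, 0.010])    # seconds; inhibition is the faster population
I_ext = 5.0

def simulate(T=0.5, dt=1e-4):
    """Euler-integrate the rates; with fast inhibition they settle."""
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        r_inf = k * np.maximum(W @ r + I_ext, 0.0) ** p
        r = r + dt / tau * (r_inf - r)
    return r

r = simulate()   # settles at a finite, stable fixed point
```

Zeroing the inhibitory column in this sketch leaves the excitatory subnetwork with no fixed point and its rate runs away, which is the instability the fast recurrent inhibition dynamically cancels.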

6 February 2014 Tansu Celikel Radboud University Nijmegen, the Netherlands Fleur Zeldenrust Sensory signals for motor control during active sensing

Tactile exploration in the rodent whisker system is a sensorimotor process in which whisker touch modulates whisker motion. Understanding the neural computation underlying object localization requires quantification of this sensorimotor feedback in freely behaving animals. Here, using high-speed imaging of tactile exploration, we found that mice precisely match their whisk amplitude to the anticipated object location in each whisk cycle. This modulation does not depend on the current sensory input, but is controlled by the information collected during previous whisk cycles. Timing and other properties of contact-induced whisker deformations in the current cycle encode the error in the animal's expectation of the object distance. We suggest a framework for sensorimotor computation during object localization in which anticipatory and precise modulation of whisker protraction amplitude compensates for recent body motion and encodes object location, while properties of whisker contacts, such as their timing relative to the motor pattern, encode the error in this estimate.

23 January 2014 Geoffrey Schoenbaum NIH, USA Anatoly Buchin

Does the orbitofrontal cortex signal value?
(aka What the #@$$$$!!! is the orbitofrontal cortex doing up there?)

The orbitofrontal cortex is strongly implicated in good (or at least normal) “decision-making”. Key to good decision-making is knowing the general value or "utility" of available options. Over the past decade, highly influential work has reported that neurons in the orbitofrontal cortex signal this quantity. Yet the orbitofrontal cortex is typically not necessary for apparent value-based behaviors unless those behaviors require value predictions to be derived from access to complex models of the task, and the neural correlates cited above are only part of a much richer representation linking the characteristics of specific outcomes (sensory, timing, unique value) that are expected and the events associated with obtaining them. In this talk, I will review these data to argue that this aspect of encoding in the orbitofrontal cortex is what is actually critical in explaining the role of this area in both behavior and learning, and that any contribution of this area to economic decision-making stems from its unique role in allowing value to be derived (both within and without) from these environmental models.

26 November 2013 Matthias Bethge Max Planck Institute for Biological Cybernetics, Tübingen, Germany Matthew Chalk Normative models and identification of nonlinear neural representations

Perceptual inference relies on highly nonlinear processing of high-dimensional sensory inputs. This poses a challenge, as the space of possible nonlinearities is huge and each of these functions might be implemented in many different ways. To gain better insight into the nonlinear processing of sensory signals in the brain, we are trying to make progress on two fronts: (1) By learning probabilistic representations of natural images, we explore which nonlinearities are most successful in capturing the degrees of freedom of the visual input. (2) We develop neural system identification methods with the aim of identifying unknown nonlinear properties of neurons that are difficult to capture with linear or generalized linear models. In this talk, I will first give a short summary of our results on natural image representations and our conclusions regarding effective nonlinearities (15min). In the second part (30min), I will talk about a system identification approach based on a new model called the spike-triggered mixture (STM) model. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood, but we show that in practice this does not have to be an issue. By fitting the STM model to spike responses of vibrissal afferents, we demonstrate that the STM model outperforms generalized linear and quadratic models by a large margin (up to 200 bits/s).

15 October 2013 Fred Wolf Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany Daniel Martí Understanding the Evolution of Neocortical Circuits

Over the past 65 million years, the evolution of mammals led, in several lineages, to a dramatic increase in brain size. During this process, some neocortical areas, including the primary sensory ones, expanded by many orders of magnitude. The primary visual cortex, for instance, measured about a square millimeter in Late Cretaceous stem eutherians but comprises more than 2,000 mm² in Homo sapiens. If we could rewind time and restart the evolution of large-brained mammals, would the network architecture of neocortical circuits take the same shape, or would the random tinkering process of biological evolution generate different or even fundamentally distinct designs? In this talk, I will argue that, based on the consolidated mammalian phylogenies now available, this seemingly speculative question can be rigorously approached using a combination of quantitative brain imaging, computational, and dynamical-systems techniques. Our studies of visual cortical circuit layout in a broad range of eutherian species indicate that neuronal plasticity and developmental network self-organization have restricted the evolution of the neuronal circuitry underlying orientation columns to a few discrete design alternatives. Our theoretical analyses predict that different evolutionary lineages adopt virtually identical circuit designs when using only qualitatively similar mechanisms of developmental plasticity.

1 October 2013 Daniel Butts University of Maryland, USA Matthew Chalk and Bernhard Englitz Gauging the influence of network activity on cortical neuron function

In addition to visual information from the thalamus, neurons in primary visual cortex (V1) receive inputs from other V1 neurons, as well as from higher cortical areas. This “non-classical” input to cortical neurons, which can be inferred in part from the local field potential (LFP), can clearly modulate the “classical” feed-forward responses of cortical neurons to visual stimuli. We characterize this modulation in the awake primate, using multi-electrode recordings to infer a model of neuron responses from both the stimulus and the LFP. In general, the influence of the LFP can be stronger than that of the stimulus, suggesting that such network modulation plays a fundamental role in cortical neuron function. Because the LFP is shaped by a number of top-down processes, including saccadic eye movements, the combined LFP-stimulus model sets a foundation for understanding how such processes directly affect cortical stimulus processing.
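The kind of model described, a neuron whose firing depends jointly on a stimulus filter and an LFP filter, can be sketched as a Poisson GLM with two covariates. Everything below (filter shapes, parameter values, the surrogate data) is hypothetical and for illustration only, not the fitted model from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000                                 # number of time bins
stim = rng.normal(size=T)                # surrogate white-noise stimulus
lfp = rng.normal(size=T)                 # surrogate LFP trace

k_stim = np.exp(-np.arange(20) / 5.0)    # hypothetical stimulus filter
k_lfp = np.hanning(30)                   # hypothetical LFP filter

# The firing rate depends on both filtered inputs through an exponential
# link, as in a standard Poisson GLM; spikes are then drawn per bin.
drive = (np.convolve(stim, k_stim, mode="same")
         + np.convolve(lfp, k_lfp, mode="same"))
rate = np.exp(0.1 * drive - 1.0)
spikes = rng.poisson(rate)
```

Comparing such a joint model against a stimulus-only model on held-out data is one way to quantify how much of the response the LFP term explains.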

19 September 2013 Daniel Durstewitz ZI Mannheim, Germany Flora Bouchacourt Physiology-Driven Models of Cortical Neurons & Networks: How Much Detail Do We Need?

Physiological neurons and networks are highly complex in terms of their biophysical, biochemical, morphological and anatomical ingredients. How much of that detail do we have to include in a physiologically realistic model? I will first give some examples that seem to suggest that biophysical details really matter for dynamical phenomena at the network level, but will then review some evidence that, on the contrary, seems to imply that much of this detail can be safely neglected. It will furthermore be argued that 'physiologically highly realistic' does not at all imply 'as much biophysical detail as possible' (sometimes even to the contrary), and will introduce some of our 'highly-data-driven' predictive physiological modeling approaches. 

27 June 2013 Eduardo J. Chichilnisky Salk Institute, USA Gabrielle Gutierrez  
21 May 2013 Raoul-Martin Memmesheimer Radboud University Nijmegen, the Netherlands Fleur Zeldenrust Learning Precisely Timed Patterns of Spikes

Experiments have revealed precisely timed patterns of spikes in several neuronal systems, raising the possibility that these temporal signals are used by the brain to encode and transmit sensory information. It is thus important to understand the capability of neural circuits to learn to produce stimulus-specific, temporally precise spikes. Learning to spike at given times is challenging, since the spike threshold and the ensuing reset induce a strongly nonlinear dependence of the voltage on the values of the synaptic weights. We develop two learning algorithms, High Threshold Projection Learning and First Error Learning, that are inspired by the well-known Perceptron rule and accomplish the task. High Threshold Projection Learning converges in finite time to exactly fit the desired spike pattern if it is realizable. First Error Learning is a more biologically plausible rule, which converges to solutions with finite precision. The algorithms are employed to establish the capacity of a leaky integrate-and-fire neuron using a temporal code. We use theoretical considerations to derive the scaling of the capacity and to predict its numerical value in the low output rate regime. To show that our algorithms are able to learn behaviorally meaningful tasks from real neuronal data, we apply them to neuronal recordings from songbirds. In addition, this suggests a novel way to estimate the information content carried by spike patterns that is accessible to neuronal architectures. Finally, we generalize our learning algorithms to learn precise spike timing patterns in arbitrary neuronal architectures that include feedback and recurrent connections.
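As a rough illustration of the setting (not the authors' High Threshold Projection Learning or First Error Learning rules), here is a toy perceptron-style update for a leaky integrate-and-fire neuron trained to spike at target times. All parameter values and the update rule itself are simplified assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, T = 50, 500                      # input channels, time steps (1 ms each)
tau, v_th, eta = 20.0, 1.0, 0.01       # membrane constant (ms), threshold, rate
inputs = rng.random((T, n_in)) < 0.02  # sparse random input spike trains
target = np.zeros(T, dtype=bool)
target[[100, 250, 400]] = True         # desired output spike times
w = np.zeros(n_in)                     # synaptic weights to be learned

def run(w):
    """Simulate the LIF neuron; also return a leaky eligibility trace."""
    v, trace = 0.0, np.zeros(n_in)
    out, traces = np.zeros(T, dtype=bool), np.zeros((T, n_in))
    for t in range(T):
        trace += -trace / tau + inputs[t]   # leaky trace of each input channel
        traces[t] = trace
        v += -v / tau + w @ inputs[t]       # leaky integration of weighted input
        if v >= v_th:                       # threshold crossing: spike and reset
            out[t], v = True, 0.0
    return out, traces

for epoch in range(200):
    out, traces = run(w)
    err = target.astype(float) - out.astype(float)
    if not err.any():                       # all target spikes hit, none extra
        break
    # Perceptron-like step: potentiate inputs active before missed target
    # spikes, depress inputs active before erroneous extra spikes.
    w += eta * err @ traces
```

The nonlinearity the abstract highlights is visible here: the reset after each spike makes the voltage, and hence the output spike train, a discontinuous function of the weights.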

2 April 2013 Kenneth Harris Imperial College London, UK Alexandre Hyafil The Neural Marketplace

The brain consists of billions of neurons, which together form the world’s most powerful information processing machine. The fundamental principles that allow these cells to organize into computing networks are unknown. This talk will describe a hypothesis for neuronal self-organization, in which competition for retroaxonal factors causes neurons to form functional networks, through processes akin to those of a free-market economy. Classically, neurons communicate by anterograde conduction of action potentials. However, information can also pass backward along axons, a process that is well characterized during the development of the nervous system. Recent experiments have shown that information about changes to a neuron's output synapses may pass backward along the axon and cause changes in the same neuron's inputs. Here we suggest a computational role for such "retroaxonal" signals in adult learning. We hypothesize that strengthening of a neuron’s output synapses stabilizes recent changes in the same neuron’s inputs. During learning, the input synapses of many neurons undergo transient changes, resulting in altered spiking activity. If this in turn promotes strengthening of output synapses, the recent synaptic changes will be stabilized; otherwise they will decay. A representation of sensory stimuli therefore evolves that is tailored to the demands of behavioral tasks. The talk will describe experimental evidence in support of this hypothesis, and a mathematical theory for how networks constructed along these principles can learn information-processing tasks.

7 February 2013 Wolfgang Maass Graz University of Technology, Austria Ralph Bourdoukan Does the Brain Play Dice?

A number of experimental studies suggest that knowledge is encoded in the brain in the form of probability distributions over network states. I will present new results that examine this idea from the perspective of theory and modelling. Furthermore, I will discuss new paradigms for stochastic computations in cortical microcircuits that are able to make use of this type of knowledge representation. This provides a new perspective on many structural and dynamical features of cortical networks of neurons.

22 January 2013 Tony Movshon New York University, USA Reinoud Maex Cortical and perceptual processing of naturalistic visual structure

The perception of complex visual patterns emerges from neuronal activity in a cascade of areas in the primate cerebral cortex. Neurons in the primary visual cortex (V1) represent information about local orientation and spatial scale, but the role of the second visual area (V2) is enigmatic. We made synthetic images that contain complex features found in naturally occurring visual textures, and used them to stimulate macaque V1 and V2 neurons. Most V2 cells respond more vigorously to these stimuli than to matched control stimuli lacking naturalistic structure, while V1 cells do not. fMRI measurements in humans reveal differences in V1 and V2 responses to the same textures that are consistent with neuronal measurements in macaque. Finally, the ability of human observers to detect naturalistic structure is well predicted by the strength of the neuronal and fMRI responses in V2 but not in V1. These results reveal a novel and particular role for V2 in the representation of naturally occurring structure in visual images, and suggest ways that it begins the transformation of elementary visual features into the specific signals about scenes and objects that are found in areas further downstream in the visual pathway.

15 January 2013 Matteo Carandini University College London, UK Matthew Chalk and Mehdi Keramati Wakefulness, locomotion, and navigation: a look from visual cortex

Most of what we know about primary visual cortex (V1) comes from experiments performed under anesthesia. Yet visual cortex is typically used by awake animals while they actively navigate an environment. I will describe three studies currently performed in my laboratory to investigate how visual processing in mouse V1 is affected by wakefulness, locomotion, and navigation. The first study, by Bilal Haider, indicates that wakefulness dramatically enhances synaptic inhibition, abolishing the balance of excitation and inhibition typically seen in V1 under anesthesia. The second study, by Asli Ayaz, indicates that locomotion profoundly alters spatial integration, greatly reducing the surround suppression that is common in V1 neurons of stationary animals. The third study, by Aman Saleem, reveals that V1 signals are modulated by virtual navigation, in a way that is ideally suited to code for the visual stimuli created by locomotion in the environment. These results indicate that visual processing in mouse V1 is profoundly affected by wakefulness, locomotion, and navigation, and reinforce the need for studying the cerebral cortex during natural behavior.

13 December 2012 John A. White University of Utah, USA Anatoly Buchin Neuronal Coherence in the In-Vivo-Like State

Evidence suggests that correlated electrical activity in hippocampus and other cortical structures is important for cognitive function. We have studied the mechanisms of synchronization and coherence using electrophysiological and computational methods. In particular, we have exploited methods for introducing real-time control in cellular electrophysiology. These techniques allow us to “knock in” virtual ion channels that can be controlled with great mathematical precision, and to immerse biological neurons in real-time, virtual neuronal networks. These manipulations allow us to test computationally-based hypotheses in living cells.  From this work, I will discuss which properties of single cells and neural networks seem crucial for coherent activity in the hippocampal formation, with emphasis on how coherent activity may arise in the absence of cellular oscillations.  If there is time, I will also discuss methods we have developed and exploited more recently for studying population activity *in vitro* and *in vivo*.
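The "virtual ion channel" idea behind this real-time approach (dynamic clamp) can be sketched in a few lines: at each control step the recorded membrane potential determines the current a simulated conductance would pass, and that current is injected back into the cell. The gating kinetics and parameter values below are illustrative placeholders, not those used in the talk:

```python
import numpy as np

# Illustrative virtual conductance: values are placeholders, not from the talk.
g_virtual, E_rev, dt = 2.0, -90.0, 0.1   # nS, mV, ms

def dynamic_clamp_step(V, m):
    """One control-loop update: read V, evolve the virtual gate m,
    and return the current to inject for a first-order gated conductance."""
    m_inf = 1.0 / (1.0 + np.exp(-(V + 40.0) / 5.0))  # steady-state activation
    tau_m = 5.0                                      # gate time constant (ms)
    m += dt * (m_inf - m) / tau_m                    # gate relaxes toward m_inf
    I_inject = g_virtual * m * (V - E_rev)           # ohmic current (pA scale)
    return I_inject, m
```

Because the gating equations are under full mathematical control, the "knocked-in" channel can be given arbitrary, precisely specified kinetics, which is what makes the technique useful for hypothesis testing.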

29 November 2012 Panayiota Poirazi IMBB, Crete, Greece David Barrett Coding with dendrites

The goal of this presentation is to provide a set of predictions generated by biophysical and/or abstract mathematical models regarding the role of dendrites in information coding across three different brain regions: the hippocampus, the prefrontal cortex and the amygdala. Towards this goal I will present modelling studies (along with supporting experimental evidence) that investigate how dendrites may be used to facilitate the coding of both spatial and temporal information at the single-cell, the microcircuit and the neuronal-network level. I will first discuss how the dendrites of individual CA1 pyramidal neurons may allow a single cell to discriminate between familiar versus novel memories and propagate this information to downstream cells [1]. I will then discuss how these dendritic nonlinearities may enable stimulus specificity in individual PFC pyramidal neurons during working memory [2] and underlie the emergence of sustained activity at the single-cell and the microcircuit level [2,3]. Finally, I will present findings from our ongoing work regarding the role of dendrites in shaping the formation of fear memory engrams in the amygdala [4].

1. Pissadaki, E.K., Sidiropoulou, K., Reczko, M., and Poirazi, P. “Encoding of spatio-temporal input characteristics by a single CA1 pyramidal neuron model.” PLoS Comput. Biol. 2010 Dec; 6(12): e1001038.
2. Sidiropoulou, K. and Poirazi, P. “Predictive features of persistent activity emergence in regular spiking and intrinsic bursting model neurons.” PLoS Comput. Biol. 2012 Apr; 8(4): e1002489.
3. Papoutsi, A., Sidiropoulou, K., and Poirazi, P. “Temporal Dynamics Predict State Transitions in a Prefrontal Cortex Microcircuit Model.” (submitted)
4. Kastelakis, G., Sidropoulou, K., and Poirazi, P. “Modeling the fear memory trace.” Hellenic Society for Neuroscience Annual Meeting, October 2010.

20 November 2012 Germán Mato Centro Atómico Bariloche, Argentina David Barrett A Mechanism for Persistent Delayed Irregular Activity in Working Memory

Persistent activity in cortex is the neural correlate of working memory (WM). During persistent activity, spike trains are highly irregular, even more so than at baseline. This seemingly innocuous feature challenges our current understanding of the synaptic mechanisms underlying WM. Here we argue that in WM the prefrontal cortex (PFC) operates in a regime of balanced excitation and inhibition, and that the observed temporal irregularity reflects this regime. We show that this requires the nonlinearities underlying the persistent activity to reside primarily in the interactions between PFC neurons. We also show that short-term synaptic facilitation can be the physiological substrate of these nonlinearities, and that the resulting mechanism of balanced persistent activity is robust, in particular with respect to changes in the connectivity. As an example, we put forward a computational model of the PFC circuit involved in an oculomotor delayed-response task. The novelty of this model is that recurrent excitatory synapses are facilitating. We demonstrate that it displays direction-selective persistent activity. We find that even though the memory eventually degrades because of the heterogeneities, for plausible network size and connectivity it can be stored for several seconds. This model accounts for a large number of experimental facts, such as that firing is more irregular during the persistent state than during baseline, that the neuronal responses are very diverse, and that the preferred directions during cue and delay periods are strongly correlated while the tuning widths are not.
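The short-term facilitation invoked here is commonly described by a Tsodyks-Markram-style synapse, in which a utilization variable u facilitates with each spike while a resource variable x depletes. A minimal sketch with illustrative parameters (not the model's actual values):

```python
U, tau_f, tau_d, dt = 0.1, 1000.0, 200.0, 1.0   # baseline use; time consts (ms)

def facilitating_synapse(spike_times, T=3000):
    """Return the synaptic efficacy u*x at each presynaptic spike time."""
    u, x = U, 1.0            # utilization (facilitation) and available resources
    spikes, efficacy = set(spike_times), []
    for t in range(T):
        u += dt * (U - u) / tau_f     # u decays back to its baseline U
        x += dt * (1.0 - x) / tau_d   # resources recover toward 1
        if t in spikes:
            efficacy.append(u * x)    # transmitted strength for this spike
            x -= u * x                # resources are consumed...
            u += U * (1.0 - u)        # ...and utilization facilitates
    return efficacy

# For a 20 Hz train with these constants, facilitation dominates and the
# efficacy grows from spike to spike.
eff = facilitating_synapse(range(0, 1000, 50))
```

Because the effective synaptic strength u*x grows supralinearly with sustained presynaptic firing, facilitation provides exactly the kind of interaction nonlinearity the abstract argues is needed for balanced persistent activity.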

11 September 2012 Alfonso Renart Champalimaud Centre for the Unknown, Lisbon, Portugal David Barrett Does the variability of sensory neurons constrain the accuracy of perception?

In perceptual decision making tasks, animals have to make choices about the nature of a sensory stimulus close to their psychophysical threshold. A body of work over the last two decades has shown that the responses of sensory neurons during these tasks are variable across different presentations of the same stimulus, and that this variability is weakly, but significantly, correlated with perceptual judgements about the stimulus. These correlations (referred to as choice probability, CP) have been interpreted as representing a causal influence of sensory neurons on choice, which implies that variability in the activity of sensory neurons sets an upper bound on the accuracy of perception. I will review recent evidence that calls this interpretation into question, describe a simple model of perceptual decision making which suggests two alternative origins for measured CPs, and present results of a re-analysis of the data from classic perceptual decision making experiments in primates which supports this new interpretation. Overall, our work suggests that the accuracy of perception is less constrained by noisy sensory processing than previously thought.
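Choice probability itself is a well-defined quantity: the area under the ROC curve comparing a neuron's spike-count distributions conditioned on the animal's two choices. A minimal sketch (the simulated counts below are purely illustrative):

```python
import numpy as np

def choice_probability(counts_a, counts_b):
    """Area under the ROC curve for spike counts conditioned on choice:
    the probability that a random count preceding choice A exceeds one
    preceding choice B, counting ties as one half."""
    a = np.asarray(counts_a, dtype=float)[:, None]
    b = np.asarray(counts_b, dtype=float)[None, :]
    return (a > b).mean() + 0.5 * (a == b).mean()

# Purely illustrative: slightly elevated counts on "choice A" trials
# produce a CP modestly above the chance level of 0.5.
rng = np.random.default_rng(2)
cp = choice_probability(rng.poisson(11, 200), rng.poisson(10, 200))
```

A CP of 0.5 means the spike counts carry no information about the upcoming choice; the debate the abstract describes is about whether values above 0.5 reflect a causal feed-forward influence or some alternative origin.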

4 July 2012 Carson Chow NIH and University of Pittsburgh, USA David Barrett and Mario DiPoppa Dynamics of Cortical Competition
4 July 2012 Gordon Pipa University of Osnabrueck, Germany David Barrett and Mario DiPoppa Self organisation of the liquid state machine
4 July 2012 Ole Jensen Radboud University Nijmegen, the Netherlands David Barrett and Mario DiPoppa On the functional role of phase of alpha oscillations