Preview the speakers listed by session topic to gain further insight into conference programming.
Meet the speakers
The 3.5-day conference will focus on neural coding and dynamics at the brain-wide level, their implementation in evolved neural circuits, and the impact of theory and machine learning on our understanding of the brain.
Columbia University
Talk Title: TBD
Abstract: TBD
Harvard University
Dmitriy Aronov
University of Oxford
Talk Title: Predictive coding: Effective learning with local plasticity
Abstract: Synaptic plasticity underlying learning in biological neural networks relies only on information locally available to a synapse, such as the activity of presynaptic and postsynaptic neurons. This is a fundamental constraint on biological neural networks that shapes how they are organized. Hence, it is important to understand how effective learning in large networks of neurons can be achieved within the constraint of local plasticity. This talk will review predictive coding, an influential model describing information processing in hierarchically organized cortical circuits. The talk will demonstrate that predictive coding networks can learn as effectively as, or more effectively than, artificial neural networks trained with backpropagation, despite relying only on local Hebbian plasticity. It will also be shown that when predictive coding networks are trained with inputs similar to those received by particular cortical regions (visual cortex and entorhinal cortex), the neurons in the model develop the representations seen in those regions (direction selectivity in visual cortex and grid cells in entorhinal cortex). This suggests that the diverse neural codes seen in different cortical regions can arise from the same learning algorithm, simply due to differences in the inputs they receive.
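The locality constraint described in this abstract can be made concrete with a minimal sketch. The Python snippet below shows a single predictive coding layer in which both inference and learning use only quantities available at the synapse (presynaptic activity and a locally computed prediction error). The layer sizes, learning rates, and linear top-down prediction are illustrative assumptions, not the speaker's exact model.

```python
# Minimal sketch of predictive coding with purely local updates
# (illustrative assumptions throughout; not the talk's exact model).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 4
W = rng.normal(scale=0.1, size=(n_in, n_hid))  # top-down prediction weights

def infer(x, W, steps=50, lr_z=0.1):
    """Settle latent activity z by reducing the local prediction error."""
    z = np.zeros(n_hid)
    for _ in range(steps):
        eps = x - W @ z          # prediction error, computed at the lower layer
        z += lr_z * (W.T @ eps)  # update driven by the error fed back to z
    return z, eps

def learn(x, W, lr_w=0.01):
    """Hebbian-style update: postsynaptic error times presynaptic activity."""
    z, eps = infer(x, W)
    W += lr_w * np.outer(eps, z)  # uses only locally available quantities
    return W

for _ in range(200):
    W = learn(rng.normal(size=n_in), W)
```

Note that no global error signal is propagated backward through the network: each weight change depends only on the error at its own layer and the activity of the unit it connects to, which is the sense in which the rule is "local."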
Lab website
Princeton University
University of Washington
Imperial College London
Talk Title: Estimating the uncertainty of inputs with prediction-error circuits
Abstract: At any moment, our brains receive a stream of sensory stimuli arising from the world we interact with. Simultaneously, neural circuits are shaped by feedback signals carrying predictions about the same inputs we experience. These feedforward and feedback inputs often do not perfectly match. Thus, our brains face the challenging task of integrating these conflicting streams of information according to their reliabilities. However, how neural circuits keep track of both stimulus and prediction uncertainty is not well understood. Here, we propose a network model whose core is a hierarchical prediction-error circuit. We show that our network can estimate the variance of the sensory stimuli and the uncertainty of the prediction using the activity of negative and positive prediction-error neurons. In line with previous hypotheses, we demonstrate that neural circuits rely strongly on feedback predictions if the perceived stimuli are noisy and the underlying generative process, that is, the environment, is stable. Moreover, we show that predictions modulate neural activity at the onset of a new stimulus, even if this sensory information is reliable. In our network, the uncertainty estimation, and hence how much we rely on predictions, can be influenced by perturbing the intricate interplay of different inhibitory interneurons. We therefore investigate the contribution of those inhibitory interneurons to the weighting of feedforward and feedback inputs. Finally, we show that our network can be linked to biased perception, and we unravel how stimulus and prediction uncertainty contribute to the contraction bias.
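As a rough illustration of the weighting logic in this abstract, the sketch below uses rectified positive and negative prediction-error channels to maintain a running estimate of stimulus variance, then combines prediction and stimulus in proportion to their reliabilities. The single-unit simplification, rates, and time constants are assumptions; the talk's model involves a richer interneuron circuit.

```python
# Hypothetical single-unit sketch of uncertainty-weighted integration
# using positive and negative prediction-error channels.
import numpy as np

rng = np.random.default_rng(1)

def run_trial(stim_mean, stim_sd, prediction, pred_var, n_steps=500, tau=0.05):
    mean_est, var_est = prediction, 1.0   # feedback prediction seeds the estimate
    for _ in range(n_steps):
        s = stim_mean + stim_sd * rng.normal()   # noisy feedforward sample
        pe_pos = max(s - mean_est, 0.0)          # positive prediction-error unit
        pe_neg = max(mean_est - s, 0.0)          # negative prediction-error unit
        mean_est += tau * (pe_pos - pe_neg)      # signed error from the two channels
        var_est += tau * ((pe_pos + pe_neg) ** 2 - var_est)  # squared-error trace
    w_pred = var_est / (var_est + pred_var)      # noisier stimuli -> trust prediction
    return w_pred * prediction + (1 - w_pred) * mean_est, var_est

estimate, var_est = run_trial(stim_mean=2.0, stim_sd=1.5,
                              prediction=0.0, pred_var=0.25)
print(f"variance estimate ~ {var_est:.2f}, combined estimate = {estimate:.2f}")
```

The key point the sketch captures is that the squared error needed for variance estimation is recoverable from the summed activity of the two rectified prediction-error channels, so no single neuron needs access to the full signed error.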
Allen Institute for Neural Dynamics
Talk Title: Structure and function of locus coeruleus norepinephrine neurons
Abstract: Norepinephrine (NE) is released throughout most of the nervous system from neurons in the locus coeruleus (LC). We found a relationship between the structure and function of these neurons in mice. Axonal projections of individual neurons were largely confined within brain regions and correlated with a gradient of gene expression in the LC. LC-NE neurons showed two types of action potentials: wide (type I) and narrow (type II). Ascending projections were mostly type I and descending projections were mostly type II. In a behavioral task requiring ongoing learning, type I neurons were excited by reward prediction error, a key signal driving learning, and predicted future changes in behavior. Type II neurons were excited by lack of reward and predicted the opposite changes in future behavior. Together, these observations reveal a modular structure and function of a neurotransmitter system and show that it acts as a quantitative learning signal.
New York University
Talk Title: Multi-regional circuit mechanisms of value-based decisions
Abstract: The value of the environment determines animals’ motivational states and sets expectations for error-based learning. But how are values computed? We developed a novel temporal wagering task with latent structure, and used high-throughput behavioral training to obtain well-powered behavioral datasets from hundreds of rats that learned the structure of the task. We found that rats use distinct value computations for sequential decisions within single trials. Moreover, these sequential decisions are supported by different brain regions, suggesting that distinct neural circuits support specific types of value computations. I will discuss our ongoing efforts to delineate how distributed circuits in the orbitofrontal cortex and striatum coordinate complex value-based decisions.
Georgia Institute of Technology
University of California, Berkeley
Talk Title: Flexible circuit computations of the Drosophila head direction network
Website profile
Stanford University
Talk Title: Learning and adapting the structure of neural maps on behavioral timescales
Abstract: Over the last several decades, the tractable response properties of parahippocampal neurons have provided a new entry point to understanding the cognitive process of self-localization: the ability to know where you are currently located in space. Defined by functionally discrete response properties, neurons across multiple brain regions are proposed to provide the basis for an internal neural map of space, which enables animals to perform path-integration-based spatial navigation and supports the formation of spatial memories. My lab focuses on understanding the mechanisms that generate this neural map of space and how this map is used to support behavior. In this talk, I'll discuss how our internal neural maps of space adapt and exhibit plasticity in novel environments.
Brandeis University
Sainsbury Wellcome Centre
Friedrich Miescher Institute for Biomedical Research
Ashok Litwin-Kumar
Talk Title: Models of task-switching recurrent neural networks
Abstract: I will describe theoretical work we have developed to understand the dynamics of recurrent networks constructed to generate a large number of distinct low-dimensional dynamics.
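One concrete way to build such a network, shown as a hypothetical sketch below, is to gate a bank of low-rank recurrent components with a context signal, so that each task engages its own low-dimensional dynamics. This construction is an assumption chosen for illustration, not necessarily the one analyzed in the talk.

```python
# Hypothetical sketch: context-gated bank of low-rank recurrent components.
import numpy as np

rng = np.random.default_rng(2)
n, n_tasks, rank = 100, 5, 2

# one low-rank recurrent component per task: J_k = U_k V_k^T / n
U = rng.normal(size=(n_tasks, n, rank))
V = rng.normal(size=(n_tasks, n, rank))

def step(x, context, dt=0.1):
    """Leaky rate dynamics; the context vector gates the recurrent components."""
    J = sum(c * (U[k] @ V[k].T) / n for k, c in enumerate(context))
    return x + dt * (-x + J @ np.tanh(x))

x = rng.normal(size=n)
context = np.eye(n_tasks)[0]   # engage task 0 only
for _ in range(200):
    x = step(x, context)
# At steady state, the recurrent input (and hence x) lies in the rank-2
# subspace spanned by the columns of U[0]: one low-dimensional dynamic per task.
```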
Champalimaud Centre for the Unknown
École Polytechnique Fédérale de Lausanne
Abstract: TBD
California Institute of Technology
Duke University
École Normale Supérieure, Paris
The Rockefeller University
Google DeepMind
Max Planck Florida Institute for Neuroscience
Talk Title: Representation learning through predictive synaptic plasticity
Abstract: Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains perform this processing in deep sensory networks shaped through plasticity. However, our understanding of the underlying plasticity mechanisms remains rudimentary. I will introduce Latent Predictive Learning (LPL), a plasticity model prescribing a local learning rule that combines Hebbian elements with predictive plasticity. I will show that deep neural networks equipped with LPL develop disentangled object representations without supervision. The same rule accurately captures neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Finally, our model generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity (STDP). LPL thus constitutes a plausible normative theory of representation learning in the brain while making concrete testable predictions.
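A heavily simplified sketch of a rule in this spirit appears below: each unit's weight update combines a predictive term (pulling its current response toward its previous one) with a Hebbian variance term that prevents representational collapse, using only pre- and postsynaptic quantities. The constants, the linear readout, and the exact loss weighting are assumptions rather than the published LPL model.

```python
# Simplified local predictive-plasticity sketch in the spirit of LPL
# (assumed constants and linear units; not the published model).
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 20, 5
W = rng.normal(scale=0.1, size=(n_out, n_in))
mean, var = np.zeros(n_out), np.ones(n_out)

def lpl_step(x_prev, x_now, W, mean, var, lr=1e-3, lam=1.0, tau=0.01, eps=1e-6):
    z_prev, z_now = W @ x_prev, W @ x_now
    d_pred = z_prev - z_now                       # predictive term: favor slowness
    d_hebb = lam * (z_now - mean) / (var + eps)   # variance term: avoid collapse
    W += lr * np.outer(d_pred + d_hebb, x_now)    # postsynaptic factor x presynaptic
    mean += tau * (z_now - mean)                  # running statistics, locally computable
    var += tau * ((z_now - mean) ** 2 - var)
    return W, mean, var

# consecutive noisy "views" of a drifting input stand in for temporal continuity
x = rng.normal(size=n_in)
for _ in range(1000):
    x_next = x + 0.1 * rng.normal(size=n_in)
    W, mean, var = lpl_step(x, x_next, W, mean, var)
    x = x_next
```

The design choice to combine a slowness-seeking predictive term with a variance-preserving Hebbian term is what lets the rule learn without supervision while avoiding the trivial solution of constant output.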