Preview the speakers and workshops below to gain further insight into conference programming.
Main Event Website
Affiliation: University of Tübingen, MPI Tübingen
Talk Title: Building mechanistic models of neural computations with simulation-based machine learning
Abstract: Experimental techniques now make it possible to measure the structure and function of neural circuits at an unprecedented scale and resolution. How can we leverage this wealth of data to understand how neural circuits perform computations underlying behaviour? A mechanistic understanding will require models that align with experimental measurements and biophysical mechanisms, while also being capable of performing behaviorally relevant computations. Building such models has remained a central challenge. I will present our work on addressing this challenge: We have developed machine learning methods and differentiable simulators that make it possible to algorithmically identify models that link biophysical mechanisms, neural data, and behaviour. I will show how these approaches—in combination with modern connectomic measurements—make it possible to build large-scale mechanistic models of the fruit fly visual system, and how such a model can make experimentally testable predictions for each neuron in the system.
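The abstract's central idea, algorithmically identifying simulator parameters that are consistent with data, can be illustrated with a deliberately simple sketch. The snippet below (a toy one-parameter simulator with rejection-style sampling, not the speaker's neural-network-based method) shows the basic simulate-compare-accept loop that simulation-based inference builds on.

import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    # Toy "mechanistic" simulator: one gain parameter driving a noisy response.
    drive = np.linspace(0.0, 1.0, n)
    return theta * drive + rng.normal(0.0, 0.1, size=n)

def summary(x):
    # Summary statistics used to compare a simulation to the observed data.
    return np.array([x.mean(), x.std()])

observed = simulate(2.5)                        # pretend recording, true gain = 2.5
obs_stats = summary(observed)

prior_draws = rng.uniform(0.0, 5.0, size=5000)  # candidate parameters drawn from a prior
distances = np.array([np.linalg.norm(summary(simulate(t)) - obs_stats) for t in prior_draws])
accepted = prior_draws[distances < np.quantile(distances, 0.01)]  # keep the closest 1%

print(f"posterior mean = {accepted.mean():.2f} (true value 2.5)")

Modern simulation-based inference replaces this crude rejection step with learned density estimators and, as in the talk, differentiable simulators whose parameters can be optimised directly.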
Affiliation: Allen Institute for Neural Dynamics
Talk Title: TBA
Abstract: TBA
Talk Title: A motor observatory: capturing diverse mouse behavior
Abstract: In this talk, we will introduce the motor observatory, a new rig aimed at capturing the 3D kinematics and physiology of diverse mouse behavior. We will detail our current plans for a mouse playground and the technical challenges in capturing mouse kinematics. Next, we will introduce our vision for a robust and easy-to-set-up process for 3D tracking, including a new web-based app for 3D annotation, a web-based app for visualizing and annotating behavior, and early developments in semi-supervised pose estimation.
Affiliation: University of Oregon
Talk Title: Experience-dependent reorganization of inhibitory neuron synaptic connectivity
Abstract: Organisms continually tune their perceptual systems to the features they encounter in their environment. We have studied how ongoing experience reorganizes the synaptic connectivity of neurons in the olfactory (piriform) cortex of the mouse. We developed an approach to measure synaptic connectivity in vivo, training a deep convolutional network to reliably identify monosynaptic connections from the spike-time cross-correlograms of 4.4 million single-unit pairs. This revealed that excitatory piriform neurons with similar odor tuning are more likely to be connected. We asked whether experience enhances this like-to-like connectivity but found that it was unaffected by odor exposure. Experience did, however, alter the logic of interneuron connectivity. Following repeated encounters with a set of odorants, inhibitory neurons that responded differentially to these stimuli exhibited a high degree of both incoming and outgoing synaptic connections within the cortical network. This reorganization depended only on the odor tuning of the inhibitory interneuron and not on the tuning of its pre- or postsynaptic partners. A computational model of this reorganized connectivity predicts that it increases the dimensionality of the entire network’s responses to familiar stimuli, thereby enhancing their discriminability. We confirmed that this network-level property is present in physiological measurements, which showed increased dimensionality and separability of the evoked responses to familiar versus novel odorants. Thus, a simple, non-Hebbian reorganization of interneuron connectivity may selectively enhance an organism’s discrimination of the features of its environment.
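The analysis hinges on spike-time cross-correlograms, histograms of the time lags between the spikes of two units. Below is a minimal sketch of computing one for a single pair, using toy Poisson-like spike trains; the study's deep-network connection classifier is not reproduced here.

import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_size=0.001, window=0.05):
    # Histogram of spike-time differences (unit b relative to unit a) within +/- window seconds.
    bins = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(bins) - 1)
    for t in spikes_a:
        nearby = spikes_b[(spikes_b > t - window) & (spikes_b < t + window)]
        counts += np.histogram(nearby - t, bins=bins)[0]
    return bins[:-1] + bin_size / 2, counts

# Two toy spike trains: unit b partly follows unit a with a ~2 ms lag.
rng = np.random.default_rng(1)
spikes_a = np.sort(rng.uniform(0, 100, 2000))
spikes_b = np.sort(np.concatenate([spikes_a[:500] + 0.002, rng.uniform(0, 100, 1500)]))
lags, ccg = cross_correlogram(spikes_a, spikes_b)
print("peak lag (ms):", 1000 * lags[np.argmax(ccg)])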
Affiliation: Allen Institute for Brain Science
Talk Title: Allen Common Coordinate Framework: 10-Year Anniversary
Abstract: The Allen Mouse Brain Common Coordinate Framework (CCFv3; Wang et al., 2020) is a 3D reference space: an average brain at 10 µm voxel resolution created from serial two-photon tomography images of 1,675 young adult C57BL/6J mice. Using multimodal reference data, the entire brain was parcellated directly in 3D, labeling every voxel with a brain structure spanning 43 isocortical areas and their layers, 314 subcortical gray matter structures, 81 fiber tracts, and 8 ventricular structures. Annotation is supported through informatics analysis of gene expression, connectivity, topography, and a novel curved cortical coordinate system. The CCF is freely accessible, providing an anatomical infrastructure for the quantification, integration, visualization, and modeling of large-scale data sets for the Allen Institute for Brain Science and the entire neuroscience community. Since its first release in 2015, the Allen CCF has been widely adopted by the scientific community, supporting the integration of transcriptomics, morphology, connectivity, electrophysiology, and behavior experiments and acting as an integration platform for data, derived analyses, and scientific models. It is the spatial coordinate system for many of the large neuroscience initiatives, including the Allen Institute for Brain Science, BRAIN Initiative Cell Census Network, BRAIN Initiative Cell Atlas Network, BRAIN CONNECTS, International Brain Laboratory, Janelia’s MouseLight Project, and the EU-funded Human Brain Project.
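In practice the CCF is used by looking up, for any 3D coordinate, which annotated structure that voxel belongs to. The sketch below shows that lookup with a toy stand-in volume (the real 10 µm CCFv3 annotation volume is 1320 x 800 x 1140 voxels and is distributed by the Allen Institute; the array size and structure ID here are made up for illustration).

import numpy as np

resolution_um = 10
annotation = np.zeros((100, 80, 114), dtype=np.uint32)   # toy stand-in for the annotation volume
annotation[50:60, 20:30, 40:50] = 385                    # pretend this block belongs to one structure

def structure_at(x_um, y_um, z_um):
    # Convert a coordinate in micrometres to voxel indices and return the structure ID stored there.
    i, j, k = int(x_um // resolution_um), int(y_um // resolution_um), int(z_um // resolution_um)
    return int(annotation[i, j, k])

print(structure_at(550, 250, 450))                       # -> 385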
Affiliation: Lawrence Berkeley National Laboratory
To learn more about Chris and his research, click here.
Affiliation: Caltech, Google Research & Caltech
Talk Title: The Anatomy of a Foundation Model
Abstract: In this talk, we’ll discuss the anatomy of building a custom foundation model from scratch. We will work from the bottom up, starting with tokenization and moving up to adapting transformers to uncommon representations.
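As a concrete anchor for the "bottom" of that stack, the sketch below shows about the simplest possible tokenizer, a byte-level one with a fixed vocabulary of 256 token IDs. This is purely illustrative; a custom foundation model would typically use a tokenizer learned from, and suited to, its particular data.

def tokenize(text):
    # Byte-level tokenization: map text to integer token IDs (raw UTF-8 bytes, vocabulary size 256).
    return list(text.encode("utf-8"))

def detokenize(ids):
    return bytes(ids).decode("utf-8", errors="replace")

ids = tokenize("spike trains")
print(ids[:5], detokenize(ids))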
To learn more about Yisong and their research, click here.
Affiliation: Stanford University
Talk Title: State Space Models for Natural and Artificial Intelligence
Abstract: New recording technologies are revolutionizing neuroscience, allowing us to measure the spiking activity of large numbers of neurons in freely behaving animals. These technologies offer exciting opportunities to link brain activity to behavioral output, but they also pose statistical challenges. Neural and behavioral data are noisy, high-dimensional time series with nonlinear dynamics and substantial variability across subjects. I will present our work on state space models (SSMs) to tackle these challenges. The key idea is that high-dimensional measurements often reflect the evolution of underlying latent states, whose dynamics may shed light on neural computation. For example, we have used SSMs to study how attractor dynamics in the hypothalamus encode persistent internal states during social interaction, and to connect stereotyped movements to moment-to-moment fluctuations in brain activity. There has been a resurgence of interest in SSMs within the machine learning community as well, and SSMs now form the backbone of several state-of-the-art models for sequential data. I will present recent work from my lab that focuses on novel models and efficient algorithms for sequential data, with applications in neuroscience and beyond. Together, these projects highlight the central role of state space models in our studies of both biological and artificial intelligence.
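The core object in this line of work is easy to write down: a low-dimensional latent state with simple dynamics, observed only through a noisy high-dimensional readout. The sketch below simulates a linear-Gaussian state space model with made-up dynamics and readout; fitting such models to real recordings, and extending them beyond the linear-Gaussian case, is what the talk describes.

import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.99, -0.05], [0.05, 0.99]])   # slowly rotating 2-D latent dynamics
C = rng.normal(size=(50, 2))                  # linear readout to 50 "neurons"

T = 500
z = np.zeros((T, 2))
y = np.zeros((T, 50))
z[0] = [1.0, 0.0]
for t in range(1, T):
    z[t] = A @ z[t - 1] + rng.normal(0.0, 0.05, size=2)   # latent dynamics plus noise
    y[t] = C @ z[t] + rng.normal(0.0, 0.5, size=50)       # observed activity

print(y.shape)   # (500, 50): what a recording might look like; inference aims to recover z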
Affiliation: University College London (SWC)
Talk Title: Cross-species study of statistical learning – from behavior to mechanism
Abstract: A defining feature of animal intelligence is the ability to discover and update knowledge of statistical regularities in the sensory environment, in service of adaptive behaviour. This allows animals to build appropriate priors, in order to disambiguate noisy inputs, make predictions and act more efficiently. Despite decades of research in the field of human cognition and theoretical neuroscience, it is not known how such learning can be implemented in the brain. By combining sophisticated cognitive tasks in humans, rats, and mice, as well as neuronal measurements and perturbations in the rodent brain and computational modelling, we seek to build a multi-level description of how regularities in temporally extended tasks are learned and utilised. In this talk, I will specifically focus on a cross-species model to study statistical learning, in both feedback-based and non-feedback-based settings.
Affiliation: Salk Institute for Biological Studies
Talk Title: From raw data to virtual brains, bodies and behavior
Abstract: A core goal of neuroscience is to understand how the brain adaptively orchestrates movements to execute complex behaviors. Quantifying behavioral dynamics, however, has historically been prohibitively laborious or technically intractable, particularly for the unconstrained and naturalistic behaviors the brain evolved to produce. Leveraging advances in computer vision and deep learning, tools like SLEAP have made markerless motion capture in animals increasingly accessible, enabling new approaches to quantify behavior. In this talk, we will discuss recent advances in tools for behavioral quantification and provide an outlook on where the technology is headed. We will finish by showing how this form of data can enable novel approaches to faithfully model how behavior is produced by emulating brains and bodies using physics simulation and deep reinforcement learning.
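Markerless motion capture produces pose tracks, typically an array of (frames, keypoints, x/y) coordinates, from which kinematic quantities are derived. Here is a minimal sketch with simulated tracks; the array layout is an assumption for illustration and SLEAP's own file formats and APIs are not shown.

import numpy as np

rng = np.random.default_rng(0)
tracks = np.cumsum(rng.normal(0, 1.0, size=(1000, 5, 2)), axis=0)   # random-walk stand-in for pose tracks

fps = 100.0
velocity = np.diff(tracks, axis=0) * fps          # pixels per second, per keypoint
speed = np.linalg.norm(velocity, axis=-1)         # (frames - 1, keypoints)
print("mean speed per keypoint:", speed.mean(axis=0).round(1))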
To learn more about Talmo and their research, click here.
Affiliation: University of Washington
Talk Title: Learning robust neural and behavioral representations
Abstract: With an increasing ability to record neural activity and correlate it with behavior, there is a need for methods that encode neural activity and behavior into interpretable representations and decode behavioral states from those representations. Such methods need to be robust and generalizable, since they are expected to apply across recording signals, species, and behavioral tasks. In this talk, I will describe deep learning methods that we are developing towards these aims and the principles underlying their robustness, such as the incorporation of spatio-temporal, causal, and active learning paradigms.
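Decoding behavioral states from neural activity can be as simple as a cross-validated linear classifier, a useful baseline before the deep learning methods described in the talk. A toy sketch with simulated data (all names and numbers are made up):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=600)                          # e.g. moving vs. still
neural = rng.normal(size=(600, 40)) + state[:, None] * 0.4    # 40 "neurons", weakly tuned to state

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, neural, state, cv=5)
print("decoding accuracy:", scores.mean().round(2))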
Affiliation: Harvard Medical School
Talk Title: Bridging neuronal activity to perception in the era of deep learning
Abstract: Hypothesis-driven research has long been the gold standard in science, yet it is only as powerful as the hypotheses we are able to generate. In visual neuroscience, deep learning models have emerged as important sources for new ideas, suggesting ways to define how neurons in the primate brain encode what we see. In this talk, we will discuss how machine learning–based models of vision serve as both tools and conceptual frameworks to solve decades-old questions about neuronal representations and the organization of visual areas in the monkey. We will also consider whether human imagination alone can address the many scientific problems before us, or if we should embrace this new “alien-like” artificial intelligence to help us advance scientific knowledge.
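One common way such vision models are used as tools (a general technique in the field, not necessarily the speaker's own pipeline) is to fit a linear readout from model features to recorded neuronal responses and then use the fitted readout to predict responses to new images. A toy sketch with random stand-in features and responses:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 256))                                      # model features for 500 images
responses = features @ rng.normal(size=256) * 0.1 + rng.normal(size=500)    # one simulated neuron

readout = Ridge(alpha=1.0).fit(features[:400], responses[:400])
print("held-out R^2:", round(readout.score(features[400:], responses[400:]), 2))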
Affiliation: Champalimaud Center for the Unknown / IBL
Talk Title: Maintaining an open-source neurophysiology dataset in the age of AI: an IBL case-study
Abstract: Recent developments in AI have underscored the effectiveness of scaling models using large corpora of data. In systems neuroscience, this has sparked initiatives to build unified brain-behaviour models. Yet while the amount of training data drives advances, neuroscience experiments are difficult, expensive, and ethically non-trivial. Here we illustrate the practicalities and difficulties of maintaining valuable existing open datasets, using our experience with the IBL data as an example. The IBL dataset is a collection of brain-wide experiments from a single task with multiple recording modalities. We first look at the quality assurance of the experimental data for each modality. From there, we distinguish between limitations inherent to the experiment and those inherent to open pre-processing problems such as spike sorting. We argue for the need to keep the dataset alive with regular re-processing when breakthroughs happen, and we describe the associated data management aspects, i.e. the revisions and addenda that those evolutions imply. Finally, we look at current modelling attempts and explain how data handling that allows longitudinal analysis of hundreds or more experiments is still uncommon and in its infancy in our field, making for both daunting and exciting future directions.
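The revisions and addenda mentioned above can be made concrete with a hypothetical on-disk layout, loosely inspired by revisioned-dataset conventions: each re-processing (for example, a new spike sorting) lives in its own dated revision folder, so analyses pinned to an older revision remain reproducible. The folder names and file below are illustrative only.

from pathlib import Path
import tempfile

# Build a toy session folder with two dated revisions of the same dataset.
root = Path(tempfile.mkdtemp()) / "session_001" / "alf" / "probe00"
for rev in ["#2023-01-15#", "#2024-06-01#"]:
    (root / rev).mkdir(parents=True)
    (root / rev / "spikes.times.npy").touch()

def resolve(dataset, revision=None):
    # Pick a specific revision, or fall back to the most recent one.
    revisions = sorted(p.name for p in root.iterdir() if p.name.startswith("#"))
    return root / (revision or revisions[-1]) / dataset

print(resolve("spikes.times.npy"))                    # latest re-processing
print(resolve("spikes.times.npy", "#2023-01-15#"))    # pinned to an older revision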
Affiliation: Flatiron Institute
Talk Title: Browser-Based Collaborative Neuroscience: Transforming Data Exploration and Analysis Through Web Accessibility
Abstract: Modern neuroscience faces a dual challenge: managing increasingly complex datasets while fostering collaboration among researchers. While online repositories like DANDI and OpenNeuro have made data sharing possible through standardized formats like NWB, the field needs tools that go beyond simple archival capabilities. Traditional software barriers, including complex installation procedures and large data downloads, have hindered seamless exploration and analysis of shared datasets. In this talk I will introduce Neurosift and related web-based tools that address these challenges. Through zero-installation visualization tools and remote data streaming capabilities, researchers can explore the contents of large data files (especially NWB files hosted remotely), without downloading entire datasets. The platform’s extensible plugin architecture supports diverse data types, from imaging series to spike trains, while distributed processing enables compute-intensive tasks. Furthermore, by integrating AI-powered natural language interactions, Neurosift democratizes access to advanced analysis capabilities, bringing sophisticated neuroscience tools directly to researchers’ web browsers. This approach represents a shift toward more accessible, collaborative neuroscience research, where analysis tools are as readily available as the data they examine.
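The key mechanism behind streaming remote files without downloading them is the HTTP Range request: fetch only the byte ranges the reader actually needs. A minimal sketch of that idea follows; the URL is a placeholder, and a real use would point at a remotely hosted NWB asset and layer an HDF5 reader on top of the ranged reads.

import urllib.request

url = "https://example.com/"   # placeholder; use any server that honours Range requests
req = urllib.request.Request(url, headers={"Range": "bytes=0-1023"})
with urllib.request.urlopen(req) as resp:
    chunk = resp.read()
    print(resp.status, len(chunk))   # only the requested slice is transferred when Range is honoured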
Affiliation: Stanford
Affiliation: University of Montreal
Talk Title: Visual learning generalization in humans and artificial neural networks: The role of curriculum
Abstract: Visual learning enables humans and animals to enhance perceptual task performance through practice. The true value of this learning lies in its ability to generalize beyond training conditions, yet a persistent challenge is that visual learning often shows high specificity—failing to transfer to new situations, similar to overfitting in machine learning. This specificity varies across tasks, limiting practical applications in visual rehabilitation and expert training contexts such as radiology. Understanding the mechanisms driving generalization versus specificity in visual learning, and developing methods to predict and improve transfer across conditions, remains critically important. This talk will present current research from my lab using deep learning and artificial neural networks as computational tools to model, predict, and ultimately enhance generalization in visual learning. Our comparative approach between humans and artificial neural networks offers insights into the role of training curricula for better generalization.
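Curriculum effects are easy to demonstrate in a toy setting: present training examples from easy to hard rather than in random order. The sketch below does this with a perceptron on synthetic data whose per-example noise level defines difficulty (illustrative only; the lab's work uses deep networks and human observers).

import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
difficulty = rng.uniform(0.1, 2.0, size=n)                 # per-example noise level
y = np.sign(X @ w_true + difficulty * rng.normal(size=n))

order = np.argsort(difficulty)                             # curriculum: easy examples first
w = np.zeros(d)
lr = 0.01
for i in order:
    if np.sign(X[i] @ w) != y[i]:                          # perceptron update on mistakes only
        w += lr * y[i] * X[i]

accuracy = np.mean(np.sign(X @ w) == y)
print(f"training accuracy after one curriculum pass: {accuracy:.2f}")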
To learn more about Shahab and their research, click here.
Affiliation: Cambridge University
Talk Title: Connectomics on the Fly
Abstract: The fruit fly Drosophila melanogaster is an excellent model organism for the study of neural circuits, ranging from sensory processing all the way to motor control. With collaborators, we have recently completed multiple dense, synapse-resolution connectomes of the fly’s brain and nerve cord. These provide templates to build models, generate hypotheses, and design experiments. This talk will introduce these new datasets and show how they can be connected to other information such as neurotransmitter identity. We will present lessons learned for interpreting connectomic data by comparing connectomes of different fly brains. Finally, we will discuss how these resources are being used to build circuit models.
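A connectome at this resolution is, in data terms, a weighted directed graph: neurons as nodes and synapse counts as edge weights. The sketch below loads a small hypothetical connectome fragment into such a graph and runs the kind of query a circuit modeller might start from; the neuron-type names are used purely as labels and the synapse counts are made up.

import networkx as nx

edges = [
    ("Mi1", "T4a", 42),      # (presynaptic neuron, postsynaptic neuron, synapse count)
    ("Tm3", "T4a", 31),
    ("T4a", "LPi3-4", 12),
]

g = nx.DiGraph()
for pre, post, n_syn in edges:
    g.add_edge(pre, post, weight=n_syn)

print("inputs to T4a:", list(g.predecessors("T4a")))
print("strongest input:", max(g.in_edges("T4a", data="weight"), key=lambda e: e[2]))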
Affiliation: University of Southern California