Preview the speakers and workshops below to gain further insight into conference programming.
Affiliation: Department of Neuroscience, University of Copenhagen
Talk Title: Open by Design: Advancing Neurophysiology through Collaborative Tools and Experimental Insights
Abstract: TBA
To learn more about Peter and their research, click here.
Affiliation: University of Tübingen, MPI Tübingen
Talk Title: Building mechanistic models of neural computations with simulation-based machine learning
Abstract: Experimental techniques now make it possible to measure the structure and function of neural circuits at an unprecedented scale and resolution. How can we leverage this wealth of data to understand how neural circuits perform computations underlying behaviour? A mechanistic understanding will require models that align with experimental measurements and biophysical mechanisms, while also being capable of performing behaviorally relevant computations. Building such models has remained a central challenge. I will present our work on addressing this challenge: We have developed machine learning methods and differentiable simulators that make it possible to algorithmically identify models that link biophysical mechanisms, neural data, and behaviour. I will show how these approaches—in combination with modern connectomic measurements—make it possible to build large-scale mechanistic models of the fruit fly visual system, and how such a model can make experimentally testable predictions for each neuron in the system.
Affiliation: Allen Institute for Neural Dynamics
Talk Title: The AIND Data Platform for Neurophysiology and Neuroanatomy: Advancing Collaborative and Open Science
Abstract: Recent advances in neuroscience are enabling us to collect rich, multi-modal, high-dimensional data to ask questions about brain-wide circuits and their functions. Answering these questions requires us to link together molecular, anatomical, physiological and behavioral data. As the data themselves become richer and more complex, so do the quantitative methods for analyzing them. To effectively extract knowledge from these data and test theories that explain whole-brain cellular activity, we must collaborate faster with globally distributed scientists who have expertise in these different subdomains. At the Allen Institute for Neural Dynamics, we are building a data platform to foster collaboration and open science. This platform is centered on the cloud for both public data storage and scalable compute. The full data workflow and provenance is accessible to users, who can access both raw and derived data assets. Each data asset is documented using the AIND metadata schema, which details how the data was acquired and processed, as well as quality control evaluations of the data. This metadata record facilitates identifying assets based on chosen criteria, as well as understanding features of the data and its experimental context. Finally, we leverage a cloud-based data analytics platform to build containerized analysis workflows to ensure reproducible and scalable analyses that can be easily shared with collaborators.
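The metadata-driven asset search described in this abstract can be sketched as a toy filter over simplified records. The field names below are hypothetical placeholders for illustration only, not the actual AIND metadata schema:

```python
def find_assets(assets, **criteria):
    """Return the assets whose metadata matches every given criterion."""
    return [a for a in assets
            if all(a.get(k) == v for k, v in criteria.items())]

# Toy asset records; a real record would carry acquisition, processing,
# and quality-control sections per the AIND metadata schema.
assets = [
    {"modality": "ecephys", "subject": "mouse-01", "qc": "pass"},
    {"modality": "behavior", "subject": "mouse-01", "qc": "pass"},
    {"modality": "ecephys", "subject": "mouse-02", "qc": "fail"},
]

hits = find_assets(assets, modality="ecephys", qc="pass")
```

The same match-on-chosen-criteria pattern scales to any metadata schema once records are structured.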
To learn more about Saskia and their work, click here.
Affiliation: University of California San Diego
Talk Title: How does the mouse compute the coordinate of a vibrissa touch?
Abstract: As first discussed by Descartes, haptic sensing can detect distant objects when the forces of touch are transmitted through feelers to a receptor organ, e.g., via the vibrissae of animals or by sticks held by humans. What neural signals are available to compute the egocentric coordinates of indirect touch? My colleagues address this issue for the thalamic signals that drive cortical computation, using behavior, imaging, and new glutamate-sensors from the Allen (How is “open data” best used to assess and improve analytical tools?). I present new data, and a decoding scheme, to compute the location of vibrissa-mediated touch at a distance. (What is the way forward for “open science” to assess alternate computational models?).
To learn more about David and their research, click here.
Talk Title: A motor observatory: capturing diverse mouse behavior
Abstract: In this talk, we will introduce the motor observatory, a new rig aimed at capturing the 3D kinematics and physiology of diverse mouse behavior. We will detail our current plans for a mouse playground and the technical challenges in capturing mouse kinematics. Next, we will introduce our vision for a robust and easy-to-set-up process for 3D tracking, including: a new web-based app for 3D annotation, a web-based app for visualizing and annotating behavior, and early developments in semi-supervised pose estimation.
Affiliation: University of Oregon
Talk Title: TweetyBERT: Automated parsing of birdsong through self-supervised machine learning
Abstract: Deep neural networks can be trained to parse animal vocalizations – serving to identify the units of communication and to annotate sequences of vocalizations for subsequent statistical analysis. However, current methods rely on human-labelled data for training. The challenge of parsing animal vocalizations in a fully unsupervised manner remains an open problem. Addressing this challenge, we introduce TweetyBERT, a self-supervised transformer neural network developed for analysis of birdsong. The model is trained to predict masked or hidden fragments of audio, but is not exposed to human supervision or labels. Applied to canary song, TweetyBERT autonomously learns the behavioral units of song such as notes, syllables, and phrases – capturing intricate acoustic and temporal patterns. This approach of developing self-supervised models specifically tailored to animal communication will significantly accelerate the analysis of unlabeled vocal data, and may prove useful for analysis of neural signals as well.
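The masked-prediction objective this abstract describes can be illustrated with a minimal sketch: hide a random subset of spectrogram frames and ask a model to reconstruct them from the surrounding context. This is a toy NumPy version with random data standing in for a song spectrogram; TweetyBERT itself is a transformer trained on real audio:

```python
import numpy as np

rng = np.random.default_rng(0)
spec = rng.normal(size=(100, 64))   # toy spectrogram: 100 time frames x 64 freq bins
mask = rng.random(100) < 0.15       # hide roughly 15% of the frames

masked = spec.copy()
masked[mask] = 0.0                  # zero out hidden frames: the model's input
target = spec[mask]                 # what the model must reconstruct from context
```

Because the targets come from the data itself, no human labels are needed — the structure the model learns in order to fill the gaps (notes, syllables, phrases) emerges as a by-product.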
Talk Title: Experience-dependent reorganization of inhibitory neuron synaptic connectivity
Abstract: Organisms continually tune their perceptual systems to the features they encounter in their environment. We have studied how ongoing experience reorganizes the synaptic connectivity of neurons in the olfactory (piriform) cortex of the mouse. We developed an approach to measure synaptic connectivity in vivo, training a deep convolutional network to reliably identify monosynaptic connections from the spike-time cross-correlograms of 4.4 million single-unit pairs. This revealed that excitatory piriform neurons with similar odor tuning are more likely to be connected. We asked whether experience enhances this like-to-like connectivity but found that it was unaffected by odor exposure. Experience did, however, alter the logic of interneuron connectivity. Following repeated encounters with a set of odorants, inhibitory neurons that responded differentially to these stimuli exhibited a high degree of both incoming and outgoing synaptic connections within the cortical network. This reorganization depended only on the odor tuning of the inhibitory interneuron and not on the tuning of its pre- or postsynaptic partners. A computational model of this reorganized connectivity predicts that it increases the dimensionality of the entire network’s responses to familiar stimuli, thereby enhancing their discriminability. We confirmed that this network-level property is present in physiological measurements, which showed increased dimensionality and separability of the evoked responses to familiar versus novel odorants. Thus, a simple, non-Hebbian reorganization of interneuron connectivity may selectively enhance an organism’s discrimination of the features of its environment.
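The spike-time cross-correlograms underlying this connectivity analysis can be computed in a few lines of NumPy. The sketch below builds a CCG for a synthetic pair in which a putative monosynaptic connection appears as a short-latency peak; note the actual study classified CCGs with a deep convolutional network, not a simple peak read-out:

```python
import numpy as np

def cross_correlogram(pre, post, bin_ms=1.0, window_ms=10.0):
    """Histogram of post-minus-pre spike-time lags within +/- window_ms."""
    edges = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
    lags = (post[None, :] - pre[:, None]).ravel()
    lags = lags[np.abs(lags) <= window_ms]
    return np.histogram(lags, bins=edges)[0], edges

# Synthetic pair: the 'post' cell fires 2 ms after every 'pre' spike (times in ms).
pre = np.arange(0.0, 1000.0, 10.0)
post = pre + 2.0
counts, edges = cross_correlogram(pre, post)
peak_lag = edges[np.argmax(counts)]   # left edge of the most populated bin
```

On real single-unit pairs the excess of counts at short positive lags is subtle and noisy, which is why a trained classifier outperforms hand-tuned peak criteria at the scale of millions of pairs.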
Affiliation: Allen Institute for Brain Science
Talk Title: Allen Common Coordinate Framework: 10-Year Anniversary
Abstract: The Allen Mouse Brain Common Coordinate Framework (CCFv3; Wang et al., 2020) is a 3D reference space: an average brain at 10 µm voxel resolution created from serial two-photon tomography images of 1,675 young adult C57BL/6J mice. Using multimodal reference data, the entire brain was parcellated directly in 3D, labeling every voxel with a brain structure spanning 43 isocortical areas and their layers, 314 subcortical gray matter structures, 81 fiber tracts, and 8 ventricular structures. Annotation is supported through informatics analysis of gene expression, connectivity, and topography, and through a novel curved cortical coordinate system. The CCF is freely accessible, providing an anatomical infrastructure for the quantification, integration, visualization and modeling of large-scale data sets for the Allen Institute for Brain Science and the entire neuroscience community. Since its first release in 2015, the Allen CCF has been widely adopted by the scientific community, supporting the integration of transcriptomics, morphology, connectivity, electrophysiology and behavior experiments, and acting as an integration platform for data, derived analyses and scientific models. It is the spatial coordinate system for many large neuroscience initiatives, including the Allen Institute for Brain Science, the BRAIN Initiative Cell Census Network, the BRAIN Initiative Cell Atlas Network, BRAIN CONNECTS, the International Brain Laboratory, Janelia’s MouseLight Project and the EU-funded Human Brain Project.
Affiliation: Lawrence Berkeley National Laboratory
Talk Title: AI-assisted workflows for integrating data using LinkML
To learn more about Chris and his research, click here.
Affiliation: Stanford University
Talk Title: State Space Models for Natural and Artificial Intelligence
Abstract: New recording technologies are revolutionizing neuroscience, allowing us to measure the spiking activity of large numbers of neurons in freely behaving animals. These technologies offer exciting opportunities to link brain activity to behavioral output, but they also pose statistical challenges. Neural and behavioral data are noisy, high-dimensional time series with nonlinear dynamics and substantial variability across subjects. I will present our work on state space models (SSMs) to tackle these challenges. The key idea is that high-dimensional measurements often reflect the evolution of underlying latent states, whose dynamics may shed light on neural computation. For example, we have used SSMs to study how attractor dynamics in the hypothalamus encode persistent internal states during social interaction, and to connect stereotyped movements to moment-to-moment fluctuations in brain activity. There has been a resurgence of interest in SSMs within the machine learning community as well, and SSMs now form the backbone of several state-of-the-art models for sequential data. I will present recent work from my lab that focuses on novel models and efficient algorithms for sequential data, with applications in neuroscience and beyond. Together, these projects highlight the central role of state space models in our studies of both biological and artificial intelligence.
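The core generative assumption of the state space models discussed here — low-dimensional latent dynamics driving high-dimensional, noisy observations — can be simulated in a few lines. This is a linear-Gaussian toy (rotational latent dynamics, linear readout); the models in the talk are richer:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_latent, d_obs = 200, 2, 20

theta = 0.1
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # slowly decaying rotation
C = rng.normal(size=(d_obs, d_latent))                  # linear observation map

z = np.zeros((T, d_latent))
z[0] = [1.0, 0.0]
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.01 * rng.normal(size=d_latent)   # latent dynamics
x = z @ C.T + 0.1 * rng.normal(size=(T, d_obs))              # noisy observations
```

Fitting an SSM inverts this picture: given only `x`, infer the latent trajectory `z` and the dynamics that generated it, which is where the scientific interpretation lives.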
Affiliation: Institut de Neurosciences des Systèmes, INSERM 1106, Aix-Marseille Université, France
Talk Title: Brain-body rhythms at circadian and multiple day timescales
Abstract: Circadian and multiple-day (multidien) rhythms are present in most organisms on Earth. While circadian rhythms are well studied, multidien rhythms remain poorly understood. Using multimodal recordings of brain, cardiac, respiratory, and gastric activity and core temperature in control and epileptic rats, I will show how slow (circadian) and ultraslow (multidien) rhythms, and brain-body interactions, need to be accounted for to understand brain functions and actions.
Affiliation: University of Washington
Talk Title: From raw data to virtual brains, bodies and behavior
Abstract: A core goal of neuroscience is to understand how the brain adaptively orchestrates movements to execute complex behaviors. Quantifying behavioral dynamics, however, has historically been prohibitively laborious or technically intractable, particularly for unconstrained and naturalistic behaviors which the brain evolved to produce. Advanced tools in computer vision and deep learning have made markerless motion capture in animals increasingly accessible, enabling new approaches to quantify behavior. In this talk, we will discuss recent advances in tools for behavioral quantification and provide an outlook on the technology development. We will finish by showing how this form of data can enable novel approaches to faithfully model how behavior is produced by emulating brains and bodies using physics simulation and deep reinforcement learning.
To learn more about Bing and their research, click here.
Talk Title: Towards theory-guided deep learning models of probabilistic sensory perception
Abstract: Understanding how the brain transforms sensory information into decisions remains a fundamental challenge in computational neuroscience. While recent deep learning “encoding” models can predict neural responses to natural stimuli with remarkable accuracy, they provide limited insight into how the brain interprets sensory information to guide behavior. Classical theories propose the brain performs Bayesian inference using internal generative models, but empirical validation has been largely qualitative. In this talk, I will present a novel approach that bridges purely data-driven encoding models and theory-based approaches through deep learning architectures constrained by probabilistic inference principles. Unlike conventional models, our theory-guided approach may uncover the brain’s generative model, revealing interpretable components linking encoding and decoding. I will then propose targeted experimental paradigms to critically evaluate these theoretically-constrained models, offering promising avenues for uncovering new insights into probabilistic computation in the brain.
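The Bayesian-inference framing in this abstract can be made concrete with the textbook Gaussian case: combining a Gaussian prior over a stimulus feature with a Gaussian likelihood from a noisy measurement yields a posterior whose precision is the sum of the precisions and whose mean is a precision-weighted average (toy numbers, purely illustrative):

```python
# Gaussian prior over a stimulus feature and Gaussian likelihood from a
# noisy sensory measurement, both in the same (arbitrary) units.
mu_prior, var_prior = 0.0, 4.0
mu_like, var_like = 2.0, 1.0

# Conjugate update: precisions add; means combine precision-weighted.
var_post = 1.0 / (1.0 / var_prior + 1.0 / var_like)
mu_post = var_post * (mu_prior / var_prior + mu_like / var_like)
# var_post = 0.8, mu_post = 1.6: the posterior leans toward the more
# reliable (lower-variance) source of information.
```

Theory-guided architectures constrain a network to respect update rules of this kind, rather than leaving the encoding-decoding mapping entirely unconstrained.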
To learn more about Edgar and their research, click here.
Affiliation: Harvard Medical School
Talk Title: Bridging neuronal activity to perception in the era of deep learning
Abstract: Hypothesis-driven research has long been the gold standard in science, yet it is only as powerful as the hypotheses we are able to generate. In visual neuroscience, deep learning models have emerged as important sources for new ideas, suggesting ways to define how neurons in the primate brain encode what we see. In this talk, we will discuss how machine learning–based models of vision serve as both tools and conceptual frameworks to solve decades-old questions about neuronal representations and the organization of visual areas in the monkey. We will also consider whether human imagination alone can address the many scientific problems before us, or if we should embrace this new “alien-like” artificial intelligence to help us advance scientific knowledge.
Talk Title: Learning robust neural and behavioral representations
Abstract: With an increasing ability to record neural activity and correlate it with behavior, there is a need for methods that encode neural activity and behavior into interpretable representations and decode behavioral states from those representations. These methods must be robust and generalizable, since they are expected to apply across recording signals, species, and behavioral tasks. In this talk, I will describe deep learning methods that we are developing toward these aims, and the underlying principles that contribute to their robustness, such as the incorporation of spatio-temporal, causal, and active learning paradigms.
Affiliation: Champalimaud Center for the Unknown, IBL
Talk Title: Maintaining an open-source neurophysiology dataset in the age of AI: an IBL case-study
Abstract: Recent developments in AI have underscored the effectiveness of scaling models using large corpora of data. In systems neuroscience, this has sparked initiatives to build unified brain-behaviour models. Yet while the amount of training data drives advances, neuroscience experiments are difficult, expensive, and ethically non-trivial. Here we illustrate the practicalities and difficulties of maintaining such valuable existing open datasets, using our experience with the IBL data as an example. The IBL dataset is a collection of brain-wide experiments from a single task with multiple recording modalities. We first look at the quality assurance of the experimental data for each modality. From there we distinguish between limitations inherent to the experiment and those inherent to open problems in pre-processing, such as spike sorting. We argue for the need to keep the dataset alive with regular re-processing when breakthroughs happen, and we describe the associated data management aspects, i.e., the revisions and addenda that those evolutions imply. Finally, we look at current modelling attempts and explain how data handling that allows longitudinal analysis of hundreds or more experiments is still uncommon and in its inception in our field, making for both daunting and exciting future directions.
Affiliation: Flatiron Institute
Talk Title: Browser-Based Collaborative Neuroscience: Transforming Data Exploration and Analysis Through Web Accessibility
Abstract: Modern neuroscience faces a dual challenge: managing increasingly complex datasets while fostering collaboration among researchers. While online repositories like DANDI and OpenNeuro have made data sharing possible through standardized formats like NWB, the field needs tools that go beyond simple archival capabilities. Traditional software barriers, including complex installation procedures and large data downloads, have hindered seamless exploration and analysis of shared datasets. In this talk I will introduce Neurosift and related web-based tools that address these challenges. Through zero-installation visualization tools and remote data streaming capabilities, researchers can explore the contents of large data files (especially NWB files hosted remotely), without downloading entire datasets. The platform’s extensible plugin architecture supports diverse data types, from imaging series to spike trains, while distributed processing enables compute-intensive tasks. Furthermore, by integrating AI-powered natural language interactions, Neurosift democratizes access to advanced analysis capabilities, bringing sophisticated neuroscience tools directly to researchers’ web browsers. This approach represents a shift toward more accessible, collaborative neuroscience research, where analysis tools are as readily available as the data they examine.
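The remote-streaming idea in this abstract — fetch only the byte ranges a visualization actually needs, instead of downloading the whole file — can be mimicked with a toy reader. Real implementations issue HTTP range requests against remotely hosted NWB files; the class below is purely illustrative:

```python
class RangeReader:
    """Toy stand-in for HTTP range requests: read only the bytes asked
    for, tracking how much was fetched versus the full file size."""

    def __init__(self, blob: bytes):
        self.blob = blob
        self.bytes_fetched = 0

    def read(self, start: int, length: int) -> bytes:
        self.bytes_fetched += length
        return self.blob[start:start + length]

blob = bytes(range(256)) * 1000      # pretend this is a large remote data file
reader = RangeReader(blob)
chunk = reader.read(1024, 64)        # a viewer fetches one small slice
```

The payoff is the ratio: a browser tool can render a single spike train or imaging frame after transferring kilobytes from a file that is gigabytes in size.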
Affiliation: Stanford
Talk Title: Bringing Neuroscience into the DevOps Era
Abstract: Neuroscience research has been increasingly dependent on digital data analysis as the volume of data acquired continues to grow. With this explosion of data, new challenges emerge in how to appropriately acquire, curate, analyze, and share this data for research purposes. This talk will share open-source DevOps tools from IT best practices spanning realtime data acquisition, trusted timestamping of data and code, containerized pipelines for data analysis, and a framework for accessibly and permanently sharing all outputs of research. Such standardized data organization methods are prerequisites for large-scale computational models. Importantly, these tools can be leveraged to easily comply with FAIR standards and upcoming OSTP data accessibility mandates, and, in turn, improve the reproducibility, rigor, and transparency of neuroscience research.
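One ingredient mentioned above — trusted timestamping of data and code — rests on content hashing: a digest pins the exact bytes, and a timestamping service can then attest that those bytes existed at a given time. A minimal illustration of the hashing step only (the attestation itself is an external service):

```python
import hashlib

def digest(payload: bytes) -> str:
    """SHA-256 content hash; a timestamping authority would sign
    (digest, time) so the payload cannot be silently altered later."""
    return hashlib.sha256(payload).hexdigest()

data = b"raw acquisition bytes"        # stand-in for a recorded dataset
code = b"analysis pipeline source"     # stand-in for the analysis code
record = digest(data + code)           # one digest pinning data and code together
```

Any later change to either the data or the code produces a different digest, which is what makes the timestamp evidentially useful.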
Affiliation: Cambridge University
Talk Title: Connectomics on the Fly
Abstract: The fruit fly Drosophila melanogaster is an excellent model organism for the study of neural circuits ranging from sensory processing all the way to motor control. With collaborators we have recently completed multiple dense, synapse-resolution connectomes of the fly’s brain and nerve cord. This provides templates to build models, generate hypotheses and design experiments. This talk will introduce these new datasets and show how they can be connected to other information such as neurotransmitter identity. We will present lessons learned for interpreting connectomic data by comparing connectomes of different fly brains. Finally, we will discuss how these resources are being used to build circuit models.
Affiliation: Department of Biomedical Engineering, Boston University
Talk Title: Learning the Language of Smell: Foundation Models for Protein-Odor Interactions
Abstract: Foundation models are large neural networks pre-trained on unlabeled data to learn rich, generalizable embeddings that enhance performance on downstream tasks. These models are particularly valuable when data is limited, where they mitigate overfitting and improve generalization. There has been significant progress in the development and application of foundation models to problems in biochemistry, but these insights have largely been overlooked in neuroscience.
In this talk, I will discuss recent efforts by my lab to adapt biochemical foundation models to problems in olfaction. Several aspects of olfactory neuroscience support their use. Olfactory datasets that record how an odorant interacts with olfactory receptors (ORs) are small, on the order of hundreds of odorants, which stands in stark contrast to the hypothesized billions of odorants that exist. This discrepancy suggests chemical and/or protein foundation models, trained on millions of unlabeled chemicals or proteins respectively, can aid prediction of which odorants and ORs will pair.
We adapted several chemical and protein foundation models to the task of odorant-OR pair prediction across three datasets from different species, leading to three main findings. First, molecular information alone was insufficient to accurately predict olfactory receptor neuron activity, suggesting that individual neuron selectivity cannot be captured from molecular features alone. Second, integrating protein embeddings from protein foundation models drastically increased performance, suggesting that multimodal models, which integrate chemical and protein data, are critical for accurate predictions. Third, although we applied several chemical foundation models, no single model achieved superior performance, suggesting additional improvements in self-supervised methods for constructing chemical foundation models could lead to improvements on this task. I will describe approaches we are taking to provide greater chemical context in chemical foundation models to drive improvements in odorant-OR prediction.
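The multimodal pairing strategy described here — combining an odorant embedding with a receptor embedding and training a classifier on the pair — can be sketched on synthetic data. Random vectors stand in for real chemical and protein foundation-model embeddings, and the logistic-regression head is a deliberately minimal stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs, d_chem, d_prot = 500, 8, 8
chem = rng.normal(size=(n_pairs, d_chem))   # stand-in odorant embeddings
prot = rng.normal(size=(n_pairs, d_prot))   # stand-in OR embeddings
X = np.concatenate([chem, prot], axis=1)    # one multimodal feature per odorant-OR pair

w_true = rng.normal(size=X.shape[1])
y = (X @ w_true > 0).astype(float)          # synthetic binds / does-not-bind labels

# A few hundred steps of logistic-regression gradient descent on the pairs.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / n_pairs
acc = np.mean((X @ w > 0) == y)
```

The key design point mirrors the abstract's second finding: the classifier sees both modalities at once, so receptor identity can modulate what is predicted for a given odorant.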
Affiliation: University of Southern California
Talk Title: Dynamical models of neural-behavioral data with application to AI-driven neurotechnology
Abstract: A major challenge in neuroAI is to model, decode, and regulate the activity of large populations of neurons that underlie our brain’s functions and dysfunctions. Toward addressing this challenge, I will present our work on novel dynamical models of neural-behavioral data and applying them to enable a new generation of brain-computer interfaces for disorders such as major depression. First, I will present a novel dynamical modeling framework that jointly describes neural-behavioral data, dissociates behaviorally relevant neural dynamics, and learns them more accurately. Then, I will show how we can also predict the effect of inputs, such as sensory stimuli or neurostimulation, to dissociate intrinsic and input-driven neural dynamics. I further present how these models can incorporate multiple spatiotemporal scales of brain activity simultaneously. Finally, I will discuss the challenge of developing AI algorithms for real-time neurotechnology. I will present a framework that combines neural networks with stochastic state-space models to enable accurate yet flexible inference of brain states causally, non-causally, and even with missing neural samples. The above dynamical models can enable next-generation AI-driven neurotechnologies that restore lost motor and emotional function in diverse brain disorders such as paralysis and major depression.
To learn more about Maryam and their research, click here.