Scientists are creating virtual simulations of the brain to better understand the real thing
March 11, 2019
Computational neuroscientists at the Allen Institute are building hundreds of individual models of neurons that can be stacked, Lego-like, into a larger recreation to simulate how the brain works.
When asked to describe what he works on, computational neuroscientist Stefan Mihalas, Ph.D., covers one of his eyes with his hand.
“Pretend I walked into the room like this. Could you tell what’s behind my hand?” he asks.
It’s not a trick question. But for the viewer to arrive at the obvious answer — Mihalas’ right eye — requires sophisticated brainpower.
The following is by no means a complete list: Your brain recognizes the shades of light and dark in front of your eyes. It ignores the 90 percent of the visual field that’s not relevant to what you want to focus on. It assembles that visual information into the category “human face.” It sifts through the stored memories of human faces you’ve seen before. It calls up from that generalization that the majority of human faces have two symmetrically placed eyes. And it does it all in a split second, without you realizing the sheer processing power going on in your head.
The ability to infer what’s missing in a visual scene is not unique to humans — many other animals can do it too — but we are far from understanding how the brain manages those complex computations.
That’s what Mihalas wants to figure out. To do so, he and his computational neuroscientist colleagues at the Allen Institute build models, or virtual recreations, of the brain (or parts of it). Their goal is not just to understand how we make educated guesses about unseen objects, but to uncover the brain’s more basic principles, using the mammalian visual system as a starting point.
“At its most basic level, modeling is creating a concept of how something works,” said Corinne Teeter, Ph.D., also a computational neuroscientist at the Allen Institute for Brain Science, a division of the Allen Institute. “We formalize that concept in the language of mathematics so we can test it.”
Because the brain is so complicated, those mathematical models require sophisticated computer programs to build and run.
“If you take a simple system like a cannonball flying out of a cannon, you can predict what will happen with pen and paper and a few equations,” said Anton Arkhipov, Ph.D., a computational neuroscientist at the Allen Institute for Brain Science. “For a complex system like the brain, the computer has to replace pen and paper. Down the road, we hope this approach will help us make predictions that inform experiments and yield insight into the mechanisms of brain diseases. For example, once you have a realistic model, you may be able to predict how a brain circuit malfunctions in disease and how it responds to potential treatments.”
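Arkhipov’s cannonball really is a pen-and-paper problem. A minimal sketch of that contrast (the function name and values here are illustrative, not from the article):

```python
import math

# For a simple system, a single textbook equation predicts the outcome:
# horizontal range of a projectile (no air resistance) launched at speed
# v0 and angle theta is v0^2 * sin(2*theta) / g.
def cannonball_range(v0_m_per_s: float, angle_deg: float, g: float = 9.81) -> float:
    theta = math.radians(angle_deg)
    return v0_m_per_s ** 2 * math.sin(2 * theta) / g

# One line of arithmetic settles it -- no simulation needed.
print(round(cannonball_range(50.0, 45.0), 1), "meters")
```

The brain offers no such closed-form shortcut: its behavior emerges from billions of interacting parts, which is why the computer has to replace pen and paper.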
Building blocks of the virtual brain
Allen Institute computational neuroscientists Corinne Teeter, Ph.D., and Nathan Gouwens, Ph.D., discuss their work and what it means to model the brain.
Mihalas and his colleagues’ attempts to understand the brain’s ability to infer missing pieces of a visual scene rely on what are known as top-down models, where the researchers focus on recreating the brain’s behavior in the computer. In this case, the behavior is complicated enough that it requires several different models to recreate. Arkhipov leads a team that builds bottom-up models, where the researchers recreate the brain’s individual neurons computationally and then piece those virtual cells together like building blocks to simulate a larger part of the brain.
Those Lego-like models are all built from data gathered in-house, which is unusual in neuroscience research.
“At the Allen Institute, modeling was intended to be strongly interwoven in everything,” said computational neuroscientist Nathan Gouwens, Ph.D., who works on the building-block models. “It’s rare to find it so intentionally integrated.”
Different experimental research teams at the Allen Institute collect information about brain cells’ precise 3D shapes, how and when they fire electrical signals, and how they switch genes on and off; all of that data feeds into the Allen Cell Types Database. Arkhipov, Gouwens and their colleagues stitch those different data streams together into comprehensive virtual recreations of the neurons.
“Data is not knowledge,” Arkhipov said. “Models are not knowledge either, but they can help you get closer to knowledge by combining and integrating the data.”
Once the individual neurons are built in lines of code, the modelers stack those cellular recreations together to build realistic simulations of circuits, the brain’s information highways that are made of a series of interconnected neurons. Last November, the team published a study in the journal PLOS Computational Biology describing their first such “bio-realistic” circuit model, which combines 45,000 virtual neurons to recreate one specific layer of the mouse primary visual cortex, the largest section of the visual part of the brain.
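The stacking idea can be sketched with a toy network of leaky integrate-and-fire point neurons wired together at random. This is vastly simpler than the Institute’s bio-realistic circuit model; every parameter and name below is illustrative:

```python
import random

random.seed(0)

N, T, DT = 50, 200, 1.0                      # neurons, timesteps, ms per step
V_REST, V_THRESH, V_RESET = -70.0, -50.0, -70.0   # membrane voltages (mV)
TAU, WEIGHT, P_CONN = 20.0, 2.0, 0.1         # leak time constant, synapse, wiring density

# Random wiring: conns[i] lists the neurons that neuron i projects to.
conns = [[j for j in range(N) if j != i and random.random() < P_CONN]
         for i in range(N)]

v = [V_REST] * N
spikes = 0
for t in range(T):
    drive = [random.uniform(0.0, 3.0) for _ in range(N)]   # noisy external input
    fired = [i for i in range(N) if v[i] >= V_THRESH]
    for i in fired:
        v[i] = V_RESET
        spikes += 1
        for j in conns[i]:
            drive[j] += WEIGHT               # excitatory kick to downstream targets
    for i in range(N):
        # Euler step: leak toward rest plus the input drive.
        v[i] += DT * (-(v[i] - V_REST) / TAU + drive[i])

print("total spikes across the network:", spikes)
```

Scale this scheme up to hundreds of thousands of far more detailed neuron models, constrained by real measurements, and you have the flavor of the circuit simulations described here.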
The individual neuron models, the circuit model and all the tools the researchers used to create them are publicly available online. Much of the time the research teams put into developing these models actually went into creating a new software suite, an effort led by Allen Institute for Brain Science software engineer Kael Dai, and a new file format, developed in collaboration with researchers at the Swiss Blue Brain Project, so that others in the neuroscience community can create and share their own models.
The researchers have also since expanded that circuit model to include about 230,000 neuron building blocks, now representing all the layers of the primary visual cortex. The diversity of neuron types in this part of the brain is captured by more than 100 distinct neuron models, each repeated thousands of times.
They’ve since used the model circuits to make predictions about how and why the brain is wired the way it is. In one case, they’re looking at the neurons that allow mice (and likely us) to detect specific directions of motion. Different neurons fire when something moves from our left to our right rather than right to left or up and down.
If a software engineer were designing a program to recognize a specific direction of motion — say, for a self-driving car — there might be a number of different ways to build that function into a computer, said Yazan Billeh, Ph.D., a computational neuroscientist at the Allen Institute for Brain Science who works on the circuit models. “But the question is, how does biology do it? That’s when modeling comes in,” he said.
A visualization of a large-scale model of a mouse visual cortex brain circuit, built at the Allen Institute and containing 230,000 neuron building-block models. Image courtesy of Sergey Gratiy, Ph.D.
Levels of abstraction
When you’re figuring out which details to include and which to leave out of a model of the brain, it helps to know the type of question you want your model to answer.
“Say you were trying to figure out how fast a car can drive,” Teeter said. “If you consider all the details of the steering wheel and the window tint and the seat upholstery instead of the engine power and the car’s weight, you’d be focusing on the wrong things.”
Teeter and Mihalas worked together on a team that built simpler recreations of neurons, models that ignore the neurons’ branching tree-like shapes and represent them as single points in space. To save time and processing power, about three-quarters of the large circuit models are made of these simpler building blocks.
Those simpler models can be perfectly capable building blocks for models that focus on reproducing the activity of neurons in a network and the computations they implement. In fact, the researchers found that circuits built with the point-in-space neuron models behaved very similarly to circuits built with the bio-realistic neuron models in simulations of brain activity. But if you wanted to use a computational model to, say, predict how the brain might react to a new drug that interacts with one specific protein on a neuron’s surface, you’d need the more detailed versions.
“In neuroscience, we need theories at many different levels: at a high level to help us to understand the algorithms the brain is using to compute, but also at low levels, so we understand how the biophysical properties of the brain produce these computations,” said Adrienne Fairhall, Ph.D., a Professor of Physiology and Biophysics at the University of Washington who studies neuronal circuitry and theories of computation. “Building powerful biophysical level models that accumulate information from many experimental studies and that serve as a community resource is simply beyond the scope of most individual labs. This is a great target for team science efforts like those at the Allen Institute.”
Servers in the Allen Institute's dedicated server room
Neuroscience data and models can also help build better computer programs. The first wave of artificial intelligence used precise, logical rules, said Michael Buice, Ph.D., a computational theorist at the Allen Institute for Brain Science. But the more we learn about the human brain, the more we realize how imprecise it is.
Say you’re looking at a table. Your brain’s rules about what is a table and what isn’t aren’t very well defined, Buice said. “There’s sort of a space of table-ness in your brain. And it’s because all of us have this fuzzy space of what a table is that we can more or less agree on, it allows us to functionally get by,” he said. When computer scientists mimic that fuzziness through machine learning, making the software more neural-like, the programs get much closer to and often surpass human levels of performance.
Buice and his team work with data from the Allen Brain Observatory, a large-scale experimental platform where researchers study brain cells firing in real-time as mice watch different images and movies. They want to gather enough data to be able to predict which neurons will respond to a specific image, and eventually to be able to look at a pattern of brain activity and predict the precise image that spurred those specific neurons to fire.
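At its simplest, predicting the image from a pattern of activity is a decoding problem. A toy sketch with synthetic data and a nearest-template decoder (this is not the Observatory’s actual method; every name and number is made up for illustration):

```python
import random

random.seed(1)

N_NEURONS, N_IMAGES = 30, 5

# Synthetic "tuning": each image evokes a characteristic mean response
# from each neuron in the population.
tuning = [[random.uniform(0.0, 10.0) for _ in range(N_NEURONS)]
          for _ in range(N_IMAGES)]

def observe(image_id):
    """One noisy population response to a single image presentation."""
    return [m + random.gauss(0.0, 1.0) for m in tuning[image_id]]

def decode(response):
    """Guess which image was shown: pick the nearest mean-response template."""
    def dist(template):
        return sum((r - m) ** 2 for r, m in zip(response, template))
    return min(range(N_IMAGES), key=lambda i: dist(tuning[i]))

correct = sum(decode(observe(i)) == i
              for i in range(N_IMAGES) for _ in range(20))
print(f"decoded {correct}/100 presentations correctly")
```

Real neural responses are far noisier and less stable than this toy’s, which is part of why the mind-reading version of the problem is so hard.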
It’s a sci-fi level of mind-reading that turns out to be ridiculously hard, because the visual processing part of the brain is even fuzzier than the researchers first thought. But if they’re successful in modeling neurons as they are actually behaving in a living animal — together with the models of neurons at their more detailed levels — that would be a big step for computational neuroscience. “That would teach us what is actually driving the cells,” Buice said.
“We’re really far off from a working model of the entire brain,” Teeter said. “Many years down the road, we’ll hopefully be able to merge all of this together into a global idea of how the brain works.”