Virtual reality is helping neuroscientists better understand the brain

When Elise Shen goes to work, her morning starts much like that of anyone else with a desk job. She greets her coworkers. She fetches a cup of coffee. She catches up on email.

08.02.2019

Then she puts on her VR headset.

Shen, a neuroanatomy specialist at the Allen Institute, spends her days at her desk carefully tracing the delicate branching filaments of mouse neurons. It’s part of a team effort to better understand neurons — and how they work in the brain — by capturing their precise 3D shapes, also known as their morphology.

Researchers sometimes do work like this on a flat computer screen, attempting to capture a 3D shape in 2D. It doesn’t always work well, said Hanchuan Peng, Ph.D., a computer scientist at the Allen Institute for Brain Science, a division of the Allen Institute. It can be hard to perceive depth in that view, and hard to accurately trace the hundreds or thousands of tiny tendrils that branch off each cell and connect to others in the brain.

Researchers at the Allen Institute for Brain Science and Southeast University in Nanjing, China, recently devised a way to use virtual reality to help neuroscientists like Shen — by literally immersing them in the middle of an image of the brain (or part of it).

“If you really want to do this, you need to put the annotator, the human observer, in the middle of the data,” Peng said. “Most conventional tools that people use, you’re observing from outside, like you’re looking through a window.”

The researchers describe the new open-source technology, which they dubbed TeraVR, in a study published today in the journal Nature Communications. The method was developed at the SEU-Allen Joint Center for Neuron Morphology, a collaboration between the Allen Institute and Southeast University, and led by Peng and his team member, Yimin Wang, Ph.D., a computational researcher at Shanghai University and the SEU-Allen Center.

A virtual collaboration

Allen Institute and SEU-Allen Center researchers are using TeraVR to reconstruct mouse neurons from whole-brain images. Other research teams at the Allen Institute and elsewhere reconstruct neurons from thin slices of the brain, but some kinds of neurons send axon connections long distances in the brain, connections that would be missed in smaller segments.

The technology merges artificial intelligence with manual annotation — hand-tracing the neurons’ shape, which Shen describes as similar to coloring. The AI part of TeraVR captures part of the cells’ shape automatically, but so far, a human eye is still needed to accurately reconstruct neurons in their entirety.

The software also uses cloud computing to allow researchers to work on the same neuron at the same time. Neuron annotators in China and in Seattle can collaborate in virtual space to capture a cell’s shape more quickly. Depending on the complexity of the neuron and the strength of the fluorescent label, it can take anywhere from a few days to over a month for one person to reconstruct an entire cell. But this is much faster than previous methods allowed, Peng said.

The researchers also hope the technology could prove useful beyond neuron reconstruction. TeraVR can handle very large datasets: The whole-brain imaging datasets are tens of terabytes in size (the origin of TeraVR’s name), so the system could also find use in other contexts where precise work depends on visual information, such as robotic surgery.

The system has been in use at the Allen Institute and at the collaborating sites in China for about a year. Together, the research teams have reconstructed more than 1,000 complete neurons in that time.

“It’s really exciting how quickly people can adapt to this technology and put it to good use,” Peng said.