Jaume Plensa's Mirall sculpture outside the Allen Institute headquarters in Seattle.

Michael Buice, Ph.D.

Associate Investigator

Bio:

Michael Buice is a member of the modeling, analysis, and theory team at the Allen Institute, where he explores the implications of theories of neural processing and contributes to mathematical and data analysis. Before arriving at the Allen Institute, Buice worked in the lab of Ila Fiete at the University of Texas at Austin, where he helped derive a system-size expansion for the Fisher information of sensory and working memory systems and developed analytic expressions for the fluctuations in attractor network models. He held a postdoctoral research position in Carson Chow's group at the Laboratory of Biological Modeling at the National Institutes of Health (NIH). There, Buice applied kinetic theory and density functional theory to oscillator models of neural networks, answering open questions about the stability of asynchronous firing states in networks of finite size, a dynamical phenomenon related to the information present in the network. In addition, Buice helped construct a method for deriving equivalent reduced stochastic equations for systems with "incomplete information", such as an interacting network of neurons in which only a few of the neurons are actually recorded. Buice earned a Ph.D. in physics from the University of Chicago, where he worked with Jack Cowan to adapt techniques from the analysis of reaction-diffusion systems in physics to the statistics of simple neural network models.

Research Focus:


Perception is an ill-posed problem. Many of the sights and sounds we encounter every day are ambiguous, yet we consistently identify objects by sight from a variety of angles, in multiple contexts, and even when they are partially occluded or poorly lit. Because of this ambiguity, perception is necessarily an act of inference, as Helmholtz recognized in the 19th century: prior knowledge is combined with sensory data to produce estimates of the state of the physical world. The neural systems that support perception must therefore encode this prior knowledge and provide a mechanism for incorporating data from low-level sensory systems.

My research aims to identify and understand the mechanisms and principles the nervous system uses to perform the inferences that allow us to perceive the world. I am particularly interested in neural implementations of Bayesian inference, in the mechanisms by which prior knowledge is encoded, and in the implications that coding efficiency has for the structure of neural circuits. I also want to understand how network structure relates to network activity, and how that activity corresponds to the statistics of stimuli. An important part of this endeavor is understanding the characteristics of stimuli that perceptual systems evolved to interpret efficiently, how those characteristics are represented in cortex, and how (and to what extent) they can be decoded.
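
As a concrete illustration of the Bayesian framing described above, the toy sketch below combines a Gaussian prior over a single stimulus feature with a noisy Gaussian measurement; the posterior estimate is a precision-weighted average of the two, which is the sense in which prior knowledge and sensory data are combined. The snippet is illustrative only, is not drawn from this page or from any Allen Institute code, and all names and numbers in it are hypothetical.

# Toy example: Bayesian estimation of a single stimulus feature by combining
# a Gaussian prior with a noisy Gaussian measurement (conjugate update).
import numpy as np

def posterior_gaussian(prior_mean, prior_var, obs, obs_var):
    """Posterior over a scalar stimulus given a Gaussian prior and Gaussian likelihood.

    The posterior mean is the precision-weighted average of the prior mean and the
    observation; the posterior variance is the inverse of the summed precisions.
    """
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Hypothetical scenario: a broad prior that an edge orientation is near 0 degrees,
# combined with a noisy sensory measurement of an orientation near 8 degrees.
rng = np.random.default_rng(0)
true_orientation = 8.0
measurement = true_orientation + rng.normal(0.0, 2.0)   # noisy sensory datum
mean, var = posterior_gaussian(prior_mean=0.0, prior_var=25.0,
                               obs=measurement, obs_var=4.0)
print(f"measurement = {measurement:.2f} deg, "
      f"posterior estimate = {mean:.2f} +/- {np.sqrt(var):.2f} deg")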

Expertise

  • Theoretical & computational neuroscience

  • Statistical mechanics of neural networks

  • Machine learning


Research Programs

  • Neural coding