Image of the brain (photo by digitalbob8, flickr.com)

Baycrest’s amazing Virtual Brain

Randy McIntosh is helping us understand the mysteries inside our heads

Randy McIntosh’s brain isn’t very smart. It’s about as astute as your average three-year-old. But it’s getting smarter every day.

Housed in a supercomputing data centre, this “brain” is actually a model created by the Brain Network Recovery Group (Brain NRG), a consortium of 16 universities. McIntosh, a psychology professor at U of T, vice-president of research at Baycrest and director of Baycrest’s Rotman Research Institute, helped found Brain NRG.

Much of neuroscience has focused on brain structure, seeking to map the architecture of the brain and to understand the connections between brain areas. With advances in biophysics over the past few decades, scientists have been able to measure brain function in living humans and animals using sophisticated neuroimaging techniques, moving toward an understanding of how the brain processes stimuli. Think of brain structure as a picture and brain function as a movie.

But so far the “movie” has been limited to single studies: a test subject undergoes an fMRI scan as part of a research project, and the result is information about how that person’s brain responds to, say, a certain piece of music.

McIntosh’s “Virtual Brain” is one of the first projects to model brain structure and function simultaneously. It is a fluid, responsive model that can show how the brain responds to anything from looking at a picture to undergoing a stroke.

“It’s like an athlete,” says McIntosh. “You can look at an athlete’s body standing there, but it’s hard to appreciate what they can do until they actually do it. With the brain, you can see the pipelines, the wires that connect certain brain regions, but until the brain is actually engaged, you can’t really appreciate how structure enables and constrains function.”

Recent years have seen an explosion of data collected on brain structure and function. McIntosh’s group is collating these data and using them to create and refine their model. 

They started with a model of a monkey brain. It produced the same patterns of activity seen in functional neuroimaging studies of real monkeys, effectively validating their methods. They then modelled the brain structure and function of a human child and were able to “age” that brain a few years. The ultimate goal is to create a prototypical adult human brain.

The applications are potentially game-changing. Imagine physicians being able to input a stroke patient’s brain architecture into the model. The Virtual Brain would allow them to test potential treatments and to learn how the patient’s brain could rewire itself.
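The article doesn’t go into the software’s internals, but the underlying principle (simulating activity on top of a measured wiring diagram, then perturbing that wiring to mimic a stroke) can be sketched in a few lines. The toy Python model below is illustrative only; it is not The Virtual Brain’s actual code or API, and the connectivity matrix, rate equation and “lesion” are hypothetical stand-ins.

```python
# Toy illustration only: NOT The Virtual Brain's actual code or API.
# It sketches the principle described above: simulate activity ("function")
# on a fixed wiring diagram ("structure"), then damage the wiring and rerun.
import numpy as np

rng = np.random.default_rng(0)

N = 8                                                  # brain regions (toy scale)
C = rng.random((N, N)) * (rng.random((N, N)) < 0.3)    # sparse structural connectivity
np.fill_diagonal(C, 0.0)                               # no self-connections

def simulate(connectivity, steps=2000, dt=0.01, drive=0.5):
    """Euler-integrate a minimal firing-rate model: dx/dt = -x + tanh(C @ x + drive)."""
    x = np.zeros(connectivity.shape[0])
    trace = np.empty((steps, x.size))
    for t in range(steps):
        x = x + dt * (-x + np.tanh(connectivity @ x + drive))
        trace[t] = x
    return trace

healthy = simulate(C)

# Crude "virtual stroke": sever all connections into and out of region 0,
# then rerun the identical simulation and compare activity patterns.
C_lesioned = C.copy()
C_lesioned[0, :] = 0.0
C_lesioned[:, 0] = 0.0
lesioned = simulate(C_lesioned)

print("mean activity, healthy :", healthy[-500:].mean(axis=0).round(3))
print("mean activity, lesioned:", lesioned[-500:].mean(axis=0).round(3))
```

Even in this crude sketch, severing one region’s connections changes the simulated activity across the rest of the network, which is the kind of structure-to-function question the real model is built to probe at far greater scale and realism.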

“There are a number of ways of getting from point A to point B in the brain,” says McIntosh. “Can we take a patient’s brain and facilitate recovery using new pathways?”

The Virtual Brain will also help scientists understand aging. “We know that in aging there are multiple changes in the white matter in the brain. These changes have very few consequences in some people, but in others they have devastating consequences. What is it that makes one person resilient and another person not?”

Aside from prognostic innovations, the Virtual Brain could revolutionize basic research. “It’s become a virtual lab. Things you can’t do experimentally in a human brain you can do in the Virtual Brain, just to see what happens.”

Surprisingly, the biggest hurdle in getting the project going wasn’t scientific but cultural. Creating the Virtual Brain required computer scientists who could write the programs, neuroscientists who understand cognitive processes, and clinicians who have access to patients. Historically, these groups have had little in common and little reason to collaborate. Says McIntosh, “We started speaking the same language and working toward the same goal, and all of a sudden this huge innovation came about.”


 
