U of T neuroscientist on how advances in AI may help us better understand why neurons are shaped the way they are

U of T Assistant Professor Blake Richards (photo by Ken Jones). The researcher is wearing safety attire appropriate for working in the lab shown above – learn more about safety measures at U of T labs (https://ehs.utoronto.ca/home/i-work-in-a-lab/)

The shape of our neurons may indicate that our brains employ the type of learning, dubbed “deep learning,” developed to drive artificial intelligence, or AI, applications, a University of Toronto researcher has found.

While AI has made major leaps in recent years thanks to a technique that attempts to mimic our brain with a simulated, layered network of neurons, the approach contradicts a number of known biological facts about the brain. For one, deep learning in AI uses knowledge of all the network connections when learning, while our own brains can only use whatever information is available to an individual cell.
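To make that contrast concrete, here is a minimal sketch in Python (with NumPy), using made-up names like `W1` and `W2`, of why standard backpropagation is non-local: the weight update for an early layer reuses the exact values of the downstream connections, information an individual biological neuron would not have access to.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: x -> h -> y_hat
W1 = rng.normal(size=(20, 10))   # input -> hidden weights
W2 = rng.normal(size=(10, 1))    # hidden -> output weights

x = rng.normal(size=(20,))
y = np.array([1.0])

# Forward pass
h = np.tanh(x @ W1)              # hidden activity
y_hat = h @ W2                   # output
err = y_hat - y                  # output error

# Backward pass (standard backpropagation).
# The gradient for W1 depends on W2.T -- the "weight transport" problem:
# a hidden neuron in a real brain has no direct access to its downstream weights.
delta_h = (err @ W2.T) * (1 - h ** 2)
grad_W1 = np.outer(x, delta_h)
grad_W2 = np.outer(h, err)

lr = 0.01
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```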

The neurons in our brain are also complex: Most in the neocortex, for example, are shaped like trees with “roots” deep in the brain and “branches” close to the surface. By comparison, the simulated neurons used in AI are very simple with no shape, just a single point in space.

As a result, many neuroscientists are skeptical that our brains really use deep learning.

But Assistant Professor Blake Richards, a neuroscientist at U of T Scarborough, says new research suggests the AI approach may be closer to reality than previously thought.

“The shape of our neurons may well be adapted to use for the sort of deep learning algorithms used in artificial intelligence,” says Richards, a senior author of the research.

For the study, Richards and his team wanted to see whether deep learning could be done in a biologically realistic manner using simulated neurons that are more like the real ones found in the neocortex, the part of the brain responsible for higher level thought. To test the idea, they ran a simulation of neurons with two separate compartments – one for the “roots” and the other for the “branches.”
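As a rough illustration only (not the authors' published model), a two-compartment unit of this kind might be simulated along the following lines, with a basal “root” compartment driven by feedforward input and an apical “branch” compartment receiving feedback from higher layers; the function and parameter names here are assumptions made for the sketch.

```python
import numpy as np

def two_compartment_step(ff_input, fb_input, W_basal, W_apical, leak=0.1):
    """One update of an illustrative two-compartment unit.

    ff_input : feedforward drive arriving at the basal ("root") compartment
    fb_input : feedback drive arriving at the apical ("branch") compartment
    The two compartments are integrated separately and only then combined
    at the soma, so feedforward and feedback signals stay segregated.
    """
    basal = W_basal @ ff_input                  # basal compartment potential
    apical = W_apical @ fb_input                # apical compartment potential
    soma = (1 - leak) * basal + leak * apical   # soma mixes the two
    rate = 1.0 / (1.0 + np.exp(-soma))          # firing rate (sigmoid nonlinearity)
    return rate, basal, apical

# Example usage with random weights and inputs
rng = np.random.default_rng(1)
W_basal = rng.normal(size=(5, 8))
W_apical = rng.normal(size=(5, 3))
rate, basal, apical = two_compartment_step(
    rng.normal(size=8), rng.normal(size=3), W_basal, W_apical)
```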


They first developed a method for training the network that used the two separate compartments, so that it didn’t violate any major biological principles yet could still take advantage of the network’s multiple layers. They then trained it to recognize images of hand-written digits.

What they found was learning could be improved by adding more layers to the network, a sign of deep learning. “This shows that deep learning is possible using more realistic neurons, and it also suggests that neurons in the neocortex may have evolved their shape in order to do deep learning,” says Richards.
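The kind of depth comparison described here can be mimicked, very loosely, with off-the-shelf tools. The sketch below uses scikit-learn’s small built-in handwritten-digit set and ordinary point-neuron networks rather than the compartmental model from the study, and whether accuracy actually improves with extra layers depends on the data and training details.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small 8x8 handwritten-digit images (a stand-in for an MNIST-style task)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare networks with 1, 2, and 3 hidden layers
for hidden in [(64,), (64, 64), (64, 64, 64)]:
    net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
    net.fit(X_train, y_train)
    print(len(hidden), "hidden layer(s): test accuracy =",
          round(net.score(X_test, y_test), 3))
```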

If learning about how some aspects of the brain work by running AI simulations sounds like the stuff of science fiction, it shouldn’t, says Richards. He points to recent research that shows patterns of activity in simulated neural networks match those seen in human brains using functional MRI scans.

“What’s interesting about the current state of research is that AI researchers have figured out how to show a human level of skill in a number of different ways. This leaves neuroscientists trying to figure out what principles these models are capturing because presumably on some level the brain may be doing something similar.”

Richards says a virtuous cycle may emerge between AI and neuroscience in the coming years, where discoveries in one lead to discoveries in the other and vice versa. “There are many AI researchers and neuroscientists who reject this idea, but there are many like myself who think there could be a parallel growth in both AI and neuroscience,” he says.

If researchers can unravel how the human brain does deep learning, it could help develop better, more human-like AI in the future. But Richards says it’s important to note this relationship likely won’t go on forever. When it comes to learning more about general intelligence and developing better AI, he uses the analogy of flight. If you know the principles of aerodynamics, it can tell you how birds are able to fly and also help build better planes. But when you hit a big leap forward, like developing a jet engine, the lessons about how birds can fly start to become meaningless.

“We may get to a point where we develop the equivalent of the AI jet engine that’s totally different from what our brain does, a form of super-intelligent AI that’s beyond anything we can imagine,” he says. “At this point we’re probably still at the Wright brothers' stage, where it still makes sense to consider how birds are able to fly.”  

The research, which will be published in the journal eLife, was supported by a discovery grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) and a Google Faculty Research Award. It also received funding from the Learning in Machines and Brains Program that’s part of the Canadian Institute for Advanced Research, of which Richards is an associate fellow.

 
