From York University 25/11/23

Faced with images that break an expected pattern, such as a do-not-enter sign where a stop sign is expected, how does the brain react and learn, compared with when it is shown images that match its predictions?

That was the question a team, including York University, set out to answer.

A long-standing theory suggests that the brain learns a predictive model of the world, updating its internal predictions when incoming sensory data prove them wrong.
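
To make that idea concrete, here is a minimal, purely illustrative Python sketch of prediction-error-driven updating. It is not the study's model; the function name, inputs, and learning rate are hypothetical choices for illustration.

def update_prediction(prediction, observation, learning_rate=0.1):
    # Prediction error: the mismatch between incoming data and the internal model
    error = observation - prediction
    # The prediction changes only in proportion to how wrong it was
    return prediction + learning_rate * error

# Expected inputs barely change the model; a pattern violation drives learning
prediction = 1.0
for observation in [1.0, 1.0, 1.0, 5.0]:  # the final input violates the pattern
    prediction = update_prediction(prediction, observation)
    print(observation, round(prediction, 2))

On the matched inputs the error is zero and nothing updates; the violating input produces a large error and a correspondingly large update, which is the behaviour the theory attributes to prediction signals.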

However, what the researchers found surprised them, says York Faculty of Science Associate Professor Joel Zylberberg, co-corresponding author of the newly published paper.

“Testing this theory has always been a challenge,” he says.

“We needed to be able to measure the top-down signals to the sensory areas of the brain over long periods of time to show how the brain learns new sensory input patterns.”

Using a mouse model, the researchers displayed images of visual patterns over multiple days, then presented other images that violated those patterns, while measuring the brain’s activity in the visual cortex, where visual information from the retina is processed.

The idea was to test how the neurons reacted to the new pattern-violating sensory information.

Several of the researchers, including Zylberberg, are Fellows in the Canadian Institute for Advanced Research’s Learning in Machines and Brains group, which conducted the research as part of the Allen Institute for Brain Science’s Brain Observatory and its OpenScope program.

OpenScope has been compared to an observatory where astronomers work together to study the universe, only this time researchers are sharing data to study the brain.

The measurements were taken at visual cortex neurons' distal apical dendrites, which receive top-down signals, and at their cell bodies, which receive bottom-up signals.

They wanted to know whether the distal apical dendrites processed visual stimuli differently from the cell bodies, both when stimuli matched expected patterns and when they violated them.

It turns out that the brain's response to image patterns that violate its predictions evolves differently over time than its response to pattern-matching images.

“Surprisingly, the distal apical dendrites' responses grew significantly over time, becoming increasingly sensitive to inputs that violate the patterns, while the cell bodies lost their initially strong sensitivity,” says Zylberberg, a computational neuroscientist.

“This finding could offer critical insight into sensory computation and predictive learning in the brain.”

The finding suggests that the pattern-violating stimuli drove the changes, and that different forms of pattern-violating stimuli may elicit different kinds of prediction errors than expected.

It points to a component of the brain that could play a distinct and important role in sensory learning, one not previously recognized.

“Knowing how the brain processes new visual sensory information is important for developing better machine learning algorithms and applications that could hopefully help restore people's sight in the future,” says Zylberberg.
