Revolutionizing neuroscience: Stanford AI reflects brain organization



Stanford’s Wu Tsai Neuroscience Institute has developed an AI model called a topographic deep artificial neural network (TDANN) that mimics the brain’s organization of visual information. This model, which uses naturalistic data and spatial constraints, has successfully reproduced functional maps of the brain and could have a significant impact on neuroscience research and artificial intelligence. The findings, published after seven years of research, highlight the potential for more energy-efficient AI and enhanced virtual neuroscience experiments that could revolutionize medical treatments and AI’s visual processing capabilities.

Stanford researchers have developed AI that mimics brain responses to visual stimuli, potentially transforming the development of neuroscience and AI with implications for energy efficiency and medical advancements.

A team from Stanford’s Wu Tsai Neuroscience Institute has made a significant breakthrough using AI to mimic how the brain processes sensory information to understand the world, paving the way for advances in virtual neuroscience.

Watch the seconds tick by on a clock, and in the visual regions of your brain, neighboring groups of angle-selective neurons will fire in sequence as the second hand moves around the clock face. These cells form beautiful “pinwheel” maps, with each segment representing a preference for a different visual angle. Other visual areas of the brain contain maps of more complex and abstract visual features, such as distinguishing images of familiar faces from images of places, which activate distinct neural “neighborhoods.”

Such functional maps are found throughout the brain, and they are both delightful and baffling to neuroscientists, who have long wondered why the brain should have evolved a map-like layout that only modern science can observe.

To answer this question, the Stanford team developed a new type of AI algorithm – a topographic deep artificial neural network (TDANN) – that uses just two rules: naturalistic sensory inputs and spatial constraints on connections. The team found that it successfully predicts both the sensory responses and the spatial organization of several parts of the human brain’s visual system.

Seven years of research culminates in a publication

After seven years of extensive research, the results were published in a new article — “A unifying framework for functional organization in early and higher ventral visual cortex” — in the journal Neuron.

The research team was led by Wu Tsai Neuroscience Institute professor Dan Yamins, an assistant professor of psychology and of computer science, and Kalanit Grill-Spector, a professor of psychology affiliated with the institute.

Unlike conventional neural networks, TDANN incorporates spatial constraints, arranging its virtual neurons on a two-dimensional “cortical sheet” and requiring that nearby neurons share similar responses to sensory inputs. As the model learned to process images, this topographical structure caused it to form spatial maps, replicating the way neurons in the brain organize themselves in response to visual stimuli. Specifically, the model reproduced complex patterns such as pinwheel-shaped structures in the primary visual cortex (V1) and groups of neurons in the ventral temporal cortex (VTC) that respond to categories such as faces or places.
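To make the idea of a spatial constraint concrete, here is a minimal sketch in PyTorch of how such a penalty could be attached to an ordinary image model: each unit is pinned to a fixed position on a simulated cortical sheet, and an extra loss term rewards nearby units for responding similarly. This is not the authors’ released code; the layer design, the 1/(1 + distance) weighting, and names like `TopographicLayer` and `spatial_loss` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TopographicLayer(nn.Module):
    """A linear layer whose units are pinned to fixed positions on a 2D 'cortical sheet'."""

    def __init__(self, in_features, n_units, sheet_size=10.0):
        super().__init__()
        self.fc = nn.Linear(in_features, n_units)
        # Each unit gets a random, fixed (x, y) position on the simulated sheet.
        self.register_buffer("positions", torch.rand(n_units, 2) * sheet_size)

    def forward(self, x):
        return torch.relu(self.fc(x))


def spatial_loss(responses, positions, eps=1e-8):
    """Penalize physically nearby units for responding differently.

    responses: (batch, n_units) activations for a batch of images
    positions: (n_units, 2) coordinates of each unit on the cortical sheet
    """
    # Correlation of every pair of units' responses across the batch.
    r = responses - responses.mean(dim=0, keepdim=True)
    r = r / (r.norm(dim=0, keepdim=True) + eps)
    corr = r.T @ r  # (n_units, n_units), roughly in [-1, 1]

    # Distance-based weights: close pairs matter most, distant pairs barely count.
    dist = torch.cdist(positions, positions)
    weight = 1.0 / (1.0 + dist)

    # The loss is high when strongly weighted (nearby) pairs are poorly correlated.
    return (weight * (1.0 - corr)).mean()
```

During training, a term like this would be added to the image-learning objective, so the network trades off task performance against keeping its response maps spatially smooth.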

Eshed Margalit, the study’s lead author, who completed his doctorate working with Yamins and Grill-Spector, said the team used self-supervised learning approaches to improve the accuracy of the models trained to simulate the brain.

“It’s probably more like how babies learn the visual world,” Margalit said. “I don’t think we initially expected it to have such a big impact on the accuracy of the trained models, but you really have to get the task of training the network right for it to be a good model of the brain.”
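As a rough illustration of what “self-supervised” means here, the sketch below shows a contrastive image objective in the SimCLR family, which learns from two augmented views of the same photo without any labels, loosely analogous to an infant learning from raw visual experience. The temperature, the masking scheme, and the way it is combined with the spatial term above are assumptions for illustration, not the paper’s exact recipe.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(z1, z2, temperature=0.1):
    """SimCLR-style objective: two augmented views of the same image should embed nearby.

    z1, z2: (batch, dim) embeddings of the two views, produced without any labels.
    """
    batch = z1.shape[0]
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)               # (2B, dim)
    sim = z @ z.T / temperature                  # pairwise similarities, (2B, 2B)

    # A view is never its own positive; mask out the diagonal.
    mask = torch.eye(2 * batch, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))

    # The positive match for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(sim.device)
    return F.cross_entropy(sim, targets)


# A hypothetical combined training step (names are illustrative):
#   loss = contrastive_loss(model(view_1), model(view_2)) \
#          + lambda_spatial * spatial_loss(layer_responses, layer_positions)
```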

Implications for neuroscience and AI

The fully trainable model will help neuroscientists better understand the rules of brain organization, whether for vision, as in this study, or for other sensory systems like hearing.

“When the brain tries to learn something about the world — like seeing two snapshots of a person — it places neurons that respond in the same way close together in the brain, and maps are formed,” said Grill-Spector, who is the Susan S. and William H. Hindle Professor in the School of Humanities and Sciences. “We believe that this principle should also be transferable to other systems.”

This innovative approach has significant implications for both neuroscience and artificial intelligence. For neuroscientists, TDANN offers a new perspective for studying how the visual cortex develops and functions, potentially transforming treatments for neurological disorders. For AI, knowledge derived from brain organization can lead to more sophisticated visual processing systems, much like teaching computers to “see” the way humans do.

The findings could also help explain how the human brain operates with such energy efficiency. For example, the human brain performs billions of billions of mathematical operations on just 20 watts of power, while a supercomputer requires a million times more energy to perform the same calculations. The new findings suggest that neuronal maps – and the spatial or topographical constraints that shape them – likely serve to keep the wiring connecting the brain’s 100 billion neurons as simple as possible. This knowledge could be key to designing more efficient artificial systems inspired by the elegance of the brain.
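To put that comparison in concrete numbers (illustrative arithmetic only, using the figures quoted above):

```python
brain_power_watts = 20           # approximate power draw of the human brain
efficiency_gap = 1_000_000       # "a million times more energy", per the comparison above
supercomputer_watts = brain_power_watts * efficiency_gap
print(f"{supercomputer_watts / 1e6:.0f} MW")  # -> 20 MW, on the scale of a large supercomputer's power budget
```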

“AI is limited by power,” Yamins said. “In the long term, if people knew how to operate artificial systems with much lower energy consumption, this could fuel the development of AI.”

More energy-efficient AI could also contribute to the development of virtual neuroscience, in which experiments could be carried out faster and at a larger scale. In their study, the researchers demonstrated as a proof of principle that their topographic deep artificial neural network reproduced brain-like responses to a wide range of naturalistic visual stimuli, suggesting that such systems could, in the future, serve as quick and inexpensive playgrounds for prototyping neuroscience experiments and for rapidly identifying hypotheses for future testing.

Virtual neuroscience experiments could also advance human medical care. For example, training an artificial visual system the way a baby learns to see could help an AI perceive the world more like a human does, with the center of gaze sharper than the rest of the visual field. Other applications could include developing visual prosthetics or simulating exactly how diseases and injuries affect parts of the brain.

“If you can do things like make predictions that will help develop prosthetics for people who have lost vision, I think that will really be an incredible thing,” Grill-Spector said.

Reference: “A unifying framework for functional organization in early and higher ventral visual cortex” by Eshed Margalit, Hyodong Lee, Dawn Finzi, James J. DiCarlo, Kalanit Grill-Spector and Daniel LK Yamins, May 10, 2024, Neuron.
DOI: 10.1016/j.neuron.2024.04.018




