Summary: Today’s AI can read, speak, and analyze data, but still has critical limitations. NeuroAI researchers have designed a new AI model inspired by the efficiency of the human brain.
This model allows AI neurons to receive feedback and adjust in real time, improving learning and memory processes. This innovation could lead to a new generation of AI that is more efficient and accessible, bringing AI and neuroscience closer together.
Highlights:
- Inspired by the brain: The new AI model is based on how the human brain efficiently processes and adjusts data.
- Real-time adjustment: AI neurons can receive feedback and adjust on the fly, improving efficiency.
- Potential impact: This breakthrough could give rise to a new generation of AI that learns like humans, advancing both AI and neuroscience.
Source: CSHL
It reads. It talks. It sifts through mountains of data and recommends business decisions. Today’s artificial intelligence seems more human than ever. However, AI still has several critical flaws.
“As impressive as ChatGPT and all these current AI technologies are, in terms of interacting with the physical world, they are still very limited. Even in the things they do, like solving math problems and writing essays, they take billions and billions of training examples before they can do them correctly,” says Kyle Daruwalla, NeuroAI researcher at Cold Spring Harbor Laboratory (CSHL).
Daruwalla is looking for new, unconventional ways to design AI that can overcome such computational hurdles. And he may have just found one.
The key was data movement. Much of the energy consumed by modern computing goes into shuttling data back and forth. In artificial neural networks, made up of billions of connections, data can have a very long way to travel.
So, to find a solution, Daruwalla took inspiration from one of the most computationally powerful and energy-efficient machines there is: the human brain.
Daruwalla designed a new way for AI algorithms to move and process data much more efficiently, based on how our brains integrate new information. The design allows individual AI “neurons” to receive feedback and adjust on the fly rather than waiting for an entire circuit to update simultaneously. This way, the data does not have to travel as far and is processed in real time.
“In our brain, our connections are constantly changing and adjusting,” says Daruwalla. “It’s not like you pause everything, adjust, and then get back to your life.”
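As a loose illustration of this idea (not the study’s actual algorithm), each layer of a network could adjust its weights from locally available activity as a sample passes through, instead of waiting for a full backward pass. The network sizes and the simple Hebbian step below are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical two-layer network; sizes are illustrative.
W1 = rng.normal(scale=0.1, size=(6, 4))
W2 = rng.normal(scale=0.1, size=(3, 6))

def local_step(W, pre, post, lr=0.01):
    # Each layer updates "on the fly" from activity it can see locally,
    # rather than waiting for an error signal to travel back through
    # the entire circuit.
    return W + lr * np.outer(post, pre)

x = rng.normal(size=4)
h = np.tanh(W1 @ x)
W1 = local_step(W1, x, h)   # layer 1 adjusts as the sample flows through
y = np.tanh(W2 @ h)
W2 = local_step(W2, h, y)   # layer 2 adjusts immediately afterward
```

Because each update uses only the pre- and postsynaptic activity at that layer, no data has to travel back across the whole network, which is the efficiency gain the article describes.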
The new machine learning model provides evidence for a previously unproven theory that correlates working memory with learning and academic performance. Working memory is the cognitive system that allows us to stay focused on our task while recalling stored knowledge and experiences.
“There are theories in neuroscience about how working memory circuits might facilitate learning. But there is nothing as concrete as our rule that actually links the two.
“And so that’s one of the beautiful things we stumbled upon here. The theory led to a rule in which adjusting each synapse individually requires that working memory sit right next to it,” Daruwalla explains.
Daruwalla’s design could help usher in a new generation of AI that learns like us. Not only would this make AI more effective and accessible, but it would also somewhat bring neuroAI full circle. Neuroscience was feeding valuable data to AI long before ChatGPT uttered its first digital syllable. Soon, it seems, AI may return the favor.
About this news from artificial intelligence research
Author: Sara Giarnieri
Source: CSHL
Contact: Sara Giarnieri – CSHL
Image: Credited to Neuroscience News
Original research: Open access.
“Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates” by Kyle Daruwalla et al. Frontiers in Computational Neuroscience
Abstract
Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates
Feed-forward deep neural networks are effective models for a wide range of problems, but there is a significant energy cost to training and deploying such networks. Spiking neural networks (SNNs), modeled after biologically realistic neurons, offer a potential solution when deployed properly on neuromorphic computing hardware.
However, many applications train SNNs offline, and performing network training directly on neuromorphic hardware remains an open research problem. The main obstacle is that backpropagation, which makes training such deep artificial networks possible, is biologically implausible.
Neuroscientists do not know exactly how the brain would propagate a precise error signal backward through a network of neurons. Recent advances address parts of this question, such as the weight transport problem, but a complete solution remains elusive.
In contrast, recent learning rules based on the information bottleneck (IB) train each layer of a network independently, avoiding the need to propagate errors between layers. Instead, propagation is implicit in the feed-forward connectivity of the layers.
These rules take the form of a three-factor Hebbian update: a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires multiple samples to be processed simultaneously, whereas the brain sees only one sample at a time.
We propose a novel three-factor update rule in which the global signal correctly captures information about each sample through an auxiliary memory network. The auxiliary network can be trained in advance, independently of the dataset used with the primary network.
We demonstrate performance comparable to baselines on image classification tasks. Interestingly, unlike backpropagation schemes, in which there is no link between learning and memory, our rule establishes a direct link between working memory and synaptic updates. To our knowledge, this is the first rule to make this link explicit.
We explore these implications in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternative view of learning in which each layer balances memory-based compression and task performance.
This view naturally encompasses several key aspects of neural computation, including memory, efficiency, and locality.
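The three-factor update the abstract describes can be sketched minimally as follows. The layer sizes, the tanh activation, and the scalar stand-in modulator are illustrative assumptions; the paper derives the third factor from an auxiliary memory network rather than supplying it as a constant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; not taken from the paper.
n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))  # one layer's synaptic weights

def three_factor_update(W, pre, post, modulator, lr=0.01):
    """Three-factor Hebbian update: local pre- and postsynaptic activity
    (factors 1 and 2), gated by a global modulatory signal (factor 3)."""
    return W + lr * modulator * np.outer(post, pre)

# One sample at a time, as the brain would see it.
x = rng.normal(size=n_in)   # presynaptic activity
y = np.tanh(W @ x)          # postsynaptic activity
m = 0.5                     # stand-in global signal (in the paper, this comes
                            # from the auxiliary memory network)
W_new = three_factor_update(W, x, y, m)
```

Note that the update for each synapse depends only on its own pre- and postsynaptic activity plus one shared scalar, which is what makes the rule local and lets each layer learn without inter-layer error propagation.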