AI mimics human decision-making for better accuracy


Summary: Researchers have developed a neural network that mimics human decision-making by incorporating elements of uncertainty and evidence accumulation. The model, trained on handwritten digits, produces more human-like decisions than traditional neural networks.

It exhibits human-like patterns of accuracy, response time, and confidence. This advance could lead to more reliable AI systems and reduce the cognitive load of everyday decision-making.

Highlights:

  1. Human decisions: The neural network mimics human uncertainty and evidence accumulation in decision making.
  2. Performance comparison: The model exhibits human-like accuracy and confidence patterns when tested on a noisy dataset.
  3. Future potential: This approach could improve the reliability of AI and help alleviate the cognitive burden of everyday decisions.

Source: Georgia Institute of Technology

Humans make nearly 35,000 decisions every day, from whether it’s safe to cross the street to what to eat for lunch. Each decision involves weighing options, remembering similar scenarios in the past, and being reasonably confident that you’re making the right choice. What may seem like a quick decision is actually the result of gathering data from the environment. And often, the same person makes different decisions in the same scenarios at different times.

Neural networks do the opposite: they make the same decision every time. Now, Georgia Tech researchers in the lab of Associate Professor Dobromir Rahnev are training them to make decisions more like humans do.

Image: the outline of a head. Credit: Neuroscience News

The science of human decision-making is only just being applied to machine learning, but developing a neural network that behaves even more like the human brain could make these systems more reliable, the researchers say.

In a Nature Human Behaviour article, “RTNet neural network exhibits signatures of human perceptual decision making,” a team from the School of Psychology describes a new neural network trained to make decisions the way humans do.

Decoding decisions

“Neural networks make a decision without telling you whether or not they are confident in their decision,” said Farshad Rafiei, who earned his Ph.D. in psychology at Georgia Tech. “That’s one of the key differences from how people make decisions.”

Large language models (LLMs), for example, are prone to hallucination. When an LLM is asked a question it does not know the answer to, it will make something up without acknowledging the fabrication. By contrast, most humans in the same situation will admit that they do not know the answer. Building a neural network that behaves more like a human brain could curb this false confidence and lead to more accurate answers.

Making the model

The team trained their neural network on handwritten digits from a well-known benchmark dataset called MNIST and asked it to decipher each number. To gauge the model’s accuracy, they ran it on the original dataset and then added noise to the digits to make them harder for humans to discern.
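As a rough illustration of this kind of degradation (the article does not specify the noise type or level the team used, so Gaussian pixel noise is assumed here), MNIST digits can be corrupted in a few lines of PyTorch:

```python
# Hypothetical sketch: degrading MNIST digits with Gaussian pixel noise.
# The noise type and level below are illustrative assumptions, not the paper's procedure.
import torch
from torchvision import datasets, transforms

mnist = datasets.MNIST(root="./data", train=False, download=True,
                       transform=transforms.ToTensor())

def add_noise(image, noise_std=0.5):
    """Add zero-mean Gaussian noise and clamp back to the valid pixel range."""
    noisy = image + noise_std * torch.randn_like(image)
    return noisy.clamp(0.0, 1.0)

clean_image, label = mnist[0]
noisy_image = add_noise(clean_image)  # harder to classify, for humans and for models
```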

To compare the model’s performance with that of humans, they trained their model (along with three other models: CNet, BLNet, and MSDNet) on the original MNIST dataset without noise, but tested them on the noisy version used in the experiments and compared the results from the two datasets.

The researchers’ model relies on two key components: a Bayesian neural network (BNN), which uses probability to make decisions, and an evidence accumulation process that keeps track of evidence for each choice. The BNN produces slightly different answers each time.

As evidence is gathered, the accumulation process can sometimes favor one choice and sometimes another. Once there is enough evidence to decide, the RTNet stops the accumulation process and makes a decision.
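The following sketch illustrates the general accumulate-to-threshold idea described above. It is not the paper's implementation: the tiny dropout-based classifier merely stands in for the Bayesian network, and the threshold and step limit are arbitrary illustrative values.

```python
import torch
import torch.nn as nn

class StochasticDigitNet(nn.Module):
    """Tiny classifier whose outputs vary across forward passes because dropout
    stays active at inference. A stand-in for the BNN described in the article,
    used here only to illustrate the accumulation mechanism."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Dropout(p=0.5),                 # source of run-to-run variability
            nn.Linear(128, 10),
        )
    def forward(self, x):
        return self.layers(x)

def accumulate_to_decision(net, image, threshold=10.0, max_steps=100):
    """Run repeated stochastic forward passes, summing evidence per digit,
    and stop as soon as one digit's accumulated evidence crosses the threshold."""
    net.train()                                # keep dropout on so each pass differs
    evidence = torch.zeros(10)
    for step in range(1, max_steps + 1):
        with torch.no_grad():
            probs = torch.softmax(net(image.unsqueeze(0)), dim=1).squeeze(0)
        evidence += probs                      # running tally of evidence per choice
        if evidence.max() >= threshold:        # enough evidence: commit to a decision
            break
    decision = int(evidence.argmax())
    confidence = float(evidence.max() / evidence.sum())  # one simple confidence proxy
    return decision, confidence, step          # step count acts as a response-time analogue
```

Because each forward pass is stochastic, the same image can yield different decisions and different numbers of steps on different runs, mirroring the trial-to-trial variability described above.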

The researchers also timed the model’s decision-making speed to see if it follows a psychological phenomenon called the “speed-accuracy tradeoff” that causes humans to be less accurate when they have to make decisions quickly.
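In the sketch above, that tradeoff falls out of the evidence threshold: a lower threshold stops accumulation after fewer stochastic passes (faster but noisier decisions), while a higher threshold gathers more evidence and, with a trained network, would typically be more accurate. Continuing the hypothetical example (reusing `noisy_image` and the classes defined in the earlier sketches):

```python
# Illustrative only: an untrained stand-in model, used to show the mechanics.
net = StochasticDigitNet()
fast = accumulate_to_decision(net, noisy_image, threshold=2.0)     # low threshold: stops after fewer passes
careful = accumulate_to_decision(net, noisy_image, threshold=20.0)  # high threshold: gathers more evidence
print("fast    (decision, confidence, steps):", fast)
print("careful (decision, confidence, steps):", careful)
```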

Once the model’s results were in, the researchers compared them with human performance. Sixty Georgia Tech students viewed the same dataset and shared their confidence in their decisions. The researchers found that accuracy rate, response time, and confidence patterns were similar between the humans and the neural network.

“Typically, we don’t have enough human data in the existing computational literature, so we don’t know how people will behave when exposed to these images. This limitation hampers the development of models that accurately replicate human decision-making,” Rafiei said.

“This work provides one of the largest datasets of humans responding to MNIST.”

Not only did the team’s model outperform all of the rival deterministic models, it was also more accurate in high-speed scenarios, owing to another fundamental element of human psychology: RTNet behaves like humans. For example, people feel more confident when they make correct decisions. Without being trained specifically to favor confidence, the model applied this pattern automatically, Rafiei noted.

“If we try to make our models closer to the human brain, it will show in the behavior itself without fine-tuning,” he said.

The research team hopes to train the neural network on more varied datasets to test its potential. They also plan to apply this BNN model to other neural networks to enable them to reason more like humans.

Eventually, algorithms will not only be able to mimic our decision-making abilities, but could even help alleviate some of the cognitive load of the 35,000 decisions we make every day.

About this artificial intelligence research news

Author: Tess Malone
Source: Georgia Institute of Technology
Contact: Tess Malone – Georgia Institute of Technology
Image: The image is credited to Neuroscience News.

Original Research: Closed access.
“RTNet neural network exhibits signatures of human perceptual decision making” by Dobromir Rahnev et al. Nature Human Behaviour.


Abstract

RTNet neural network exhibits signatures of human perceptual decision making

Convolutional neural networks show promise as models of biological vision. However, their decision behavior, including being deterministic and using equal amounts of computation for easy and difficult stimuli, differs significantly from human decision making, limiting their applicability as models of human perceptual behavior.

Here we develop a novel neural network, RTNet, that generates stochastic decisions and human-like response time (RT) distributions. We further perform extensive testing that shows that RTNet replicates all fundamental features of human accuracy, RT, and confidence and does so better than all current alternatives.

To test RTNet’s ability to predict human behavior on novel images, we collected accuracy, reaction time, and confidence data from 60 human participants performing a digit discrimination task. We found that the accuracy, reaction time, and confidence produced by RTNet for individual novel images were correlated with the same quantities produced by human participants.

Importantly, human participants who were closer to average human performance were also found to be closer to RTNet’s predictions, suggesting that RTNet was successful in capturing average human behavior.

Overall, RTNet is a promising model of human RT that exhibits critical signatures of perceptual decision making.


