AI voices trick humans, but brain responses differ – Neuroscience News


Summary: People have difficulty distinguishing between human and AI-generated voices, correctly identifying them only about half the time. Despite this, brain scans revealed different neural responses to human and AI voices: human voices triggered areas related to memory and empathy, while AI voices activated regions for error detection and attention regulation.

These findings highlight both the challenges and potential of advanced AI voice technology. Further research will explore how personality traits affect the ability to discern the origins of voices.

Highlights:

  1. Identification struggles: Participants correctly identified human voices in 56% of cases and AI voices in 50.5%.
  2. Neural responses: Human voices activated areas linked to memory and empathy; AI voices triggered areas for error detection and attention regulation.
  3. Perception bias: Neutral voices were often perceived as AI, while happy voices were perceived as human.

Source: FENS

People aren’t very good at distinguishing between human voices and voices generated by artificial intelligence (AI), but our brains respond differently to the two, according to a study presented today (Tuesday) at the Federation of European Neuroscience Societies (FENS) Forum 2024.

The study was presented by doctoral student Christine Skjegstad and carried out by Ms Skjegstad and Professor Sascha Frühholz, both from the Department of Psychology at the University of Oslo (UiO), Norway.

Ms Skjegstad said: “We already know that AI-generated voices have become so advanced that they are almost indistinguishable from real human voices. It is now possible to clone a person’s voice from just a few seconds of recording, and scammers have used the technology to imitate a loved one in distress and trick victims into transferring money.

“While machine learning experts are developing technological solutions to detect AI voices, much less is known about the human brain’s response to these voices.”

For happy human voices, the correct identification rate was 78%, compared to just 32% for happy AI voices, suggesting that people associate happiness with a more human feeling. Credit: Neuroscience News

The research involved 43 people who listened to human and AI-generated voices expressing five different emotional states: neutral, anger, fear, joy, and pleasure. They were asked to identify each voice as synthetic or natural while their brain activity was recorded using functional magnetic resonance imaging (fMRI).

fMRI is used to detect changes in blood flow in the brain, indicating which parts of the brain are active. Participants were also asked to rate the characteristics of the voices they heard in terms of naturalness, reliability, and authenticity.

Participants correctly identified human voices only 56% of the time and AI voices only 50.5% of the time, meaning their performance was close to chance for both types of voice.
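As a rough illustration of how close these figures are to chance, the sketch below runs an exact two-sided binomial test in plain Python. Note that the report does not state how many trials each condition contained, so `n_trials = 100` is a purely hypothetical assumption chosen to make the reported percentages into whole counts; the function itself is standard textbook statistics, not part of the study.

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: the probability, under chance
    level p, of any outcome at least as unlikely as k successes in n."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed + 1e-12)

# Hypothetical trial counts; the accuracies are those reported in the study.
n_trials = 100
human_correct = 56  # 56% correct for human voices
ai_correct = 50     # ~50.5% correct for AI voices (rounded)

print(binom_two_sided_p(human_correct, n_trials))
print(binom_two_sided_p(ai_correct, n_trials))
```

Under these assumed counts, neither accuracy would differ significantly from the 50% expected by guessing, which is what "close to chance" means here; with the study's real trial counts the exact p-values would of course differ.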

People were more likely to correctly identify a “neutral” AI voice as AI (75% vs. 23% who could correctly identify a neutral human voice as human), suggesting that people assume that neutral voices sound more like AI.

AI-neutral female voices were correctly identified more often than AI-neutral male voices. For happy human voices, the correct identification rate was 78%, compared to just 32% for happy AI voices, suggesting that people associate happiness with a more human feeling.

Neutral AI and human voices were perceived as the least natural, trustworthy, and authentic, while happy human voices were perceived as the most natural, trustworthy, and authentic.

However, brain imaging revealed that human voices elicited stronger responses in areas of the brain associated with memory (right hippocampus) and empathy (right inferior frontal gyrus).

AI voices elicited stronger responses in areas related to error detection (right anterior middle cingulate cortex) and attention regulation (right dorsolateral prefrontal cortex).

Ms Skjegstad said: “My research indicates that we are not very accurate in identifying whether a voice is human or AI-generated. Participants also often expressed how difficult it was for them to differentiate between voices. This suggests that current AI voice technology can imitate human voices to such an extent that it is difficult for people to reliably distinguish them.

“The results also indicate a perception bias whereby neutral voices were more likely to be identified as AI-generated and happy voices were more likely to be identified as human, regardless of whether they actually were. This was especially the case for neutral female AI voices, perhaps because we are familiar with female voice assistants such as Siri and Alexa.

“Although we’re not very good at telling human voices from AI voices, there does seem to be a difference in the brain’s response. AI voices may elicit heightened alertness, while human voices may elicit a sense of relatedness.”

The researchers now plan to study whether personality traits, such as extraversion or empathy, make people more or less sensitive to differences between human and AI voices.

Professor Richard Roche is Chairman of the FENS Forum Communications Committee and Deputy Head of the Department of Psychology at Maynooth University, Maynooth, County Kildare, Ireland, and was not involved in the research.

He said: “Studying the brain’s responses to AI voices is crucial as this technology continues to advance. This research will help us understand the potential cognitive and social implications of AI voice technology, which could support policy and ethical guidelines.

“The risks of using this technology to scam and deceive people are clear. However, there are also potential benefits, such as voice replacement for people who have lost their natural voice. AI voices could also be used in therapy for certain mental health conditions.”

About this research news in AI and neuroscience

Author: Kerry Noble
Source: FENS
Contact: Kerry Noble – FENS
Image: The image is credited to Neuroscience News

Original research: The results will be presented at the FENS Forum 2024


