Summary: New research shows that people are more likely to accuse others of lying when an AI makes the accusation first. The finding highlights the potential social impact of AI-based lie detection, and the researchers urge policymakers to exercise caution. The study found that the presence of AI predictions increased accusation rates and changed behavior, even though people were generally reluctant to use the AI lie detection tool.
Highlights:
- AI predictions led to higher rates of lying accusations than unaided human judgment.
- Participants were more likely to call a statement a lie when the AI flagged it as false.
- Despite AI’s greater accuracy, only a third of participants chose to use it to detect lies.
Source: Cell Press
Although people lie a lot, they generally refrain from accusing others of lying because of social norms around false accusations and politeness. But artificial intelligence (AI) could soon shake up the rules.
In a study published June 27 in the journal iScience, researchers show that people are much more likely to accuse others of lying when an AI makes the accusation first.
The results provide insight into the social implications of using AI systems for lie detection, which could inform policymakers implementing similar technologies.
“Our society has strong, well-established norms regarding accusations of lying,” says lead author Nils Köbis, a behavioral scientist at the University of Duisburg-Essen in Germany.
“It would take a lot of courage and evidence to openly accuse others of lying. But our study shows that AI could become an excuse for people to hide behind, allowing them to avoid being held accountable for the consequences of their accusations.”
Human society has long operated on truth-default theory, which holds that people generally assume what they hear is true. Because of this tendency to trust others, humans are very bad at detecting lies: previous research has shown that people perform no better than chance when trying to spot them.
Köbis and his team wanted to know whether the presence of AI would change social norms and established behaviors around accusation.
To conduct their experiment, the team asked 986 people to write one true and one false description of what they planned to do the following weekend. The team then trained an algorithm on this data to develop an AI model that correctly identified true and false statements 66% of the time, far better than the average person.
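The article does not specify what kind of model the researchers trained. As a minimal sketch of how such a supervised text classifier could be built, the example below assumes a TF-IDF plus logistic regression pipeline in Python with invented stand-in statements; the study’s actual features, model, and data will differ.

```python
# Minimal sketch of a supervised true/false text classifier, as described
# above. The pipeline (TF-IDF + logistic regression) and the statements
# are assumptions for illustration, not the study's actual model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical weekend-plan descriptions, labeled 1 (true) or 0 (false).
statements = [
    "I'm driving to my parents' place for a barbecue.",
    "I'm flying to Paris to run a marathon.",
    "I'll catch up on laundry and watch a movie at home.",
    "I'm giving a keynote speech at a tech conference.",
    "We're taking the kids to the zoo on Saturday.",
    "I'm going scuba diving with dolphins off the coast.",
    "I plan to mow the lawn and do some grocery shopping.",
    "I'm recording an album with my band in a studio.",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    statements, labels, test_size=0.25, stratify=labels, random_state=0
)

# Bag-of-words features feeding a linear classifier. The study's model
# reached 66% accuracy -- modest, but above the ~50% humans achieve.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```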
The team then recruited more than 2,000 people to act as judges, tasked with reading a statement and deciding whether it was true or false. The researchers divided the participants into four groups: “baseline,” “forced,” “blocked,” and “choice.”
In the baseline group, participants judged statements as true or false without help from the AI. In the forced group, participants always received an AI prediction before making their own judgment. In the blocked and choice groups, participants had the option of requesting an AI-generated prediction; those in the blocked group who asked for it did not receive it, while those in the choice group did.
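To make the four conditions concrete, here is a toy sketch of the assignment logic, paraphrased from the description above; the function name and structure are illustrative, not the study’s actual materials.

```python
from typing import Optional

# Toy model of the four experimental conditions described in the article.
def shown_prediction(condition: str, ai_prediction: str,
                     requested: bool = False) -> Optional[str]:
    """Return the AI prediction a judge sees for one statement, if any."""
    if condition == "baseline":
        return None               # never sees the AI
    if condition == "forced":
        return ai_prediction      # always sees the AI
    if condition == "choice" and requested:
        return ai_prediction      # receives it only on request
    # "blocked": participants may request a prediction but never receive it
    return None
```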
The research team found that participants in the baseline group were 46% accurate at identifying statements as true or false. Only 19% of people in the group accused the statements they read of being false, even though they knew that half of the statements were false. This confirms that people tend to refrain from accusing others of lying.
In the forced group, where participants received an AI prediction whether they wanted it or not, more than a third of participants accused the statements of being false. This rate is significantly higher than in the baseline and blocked groups, which received no AI predictions.
When the AI predicted a statement was true, only 13% of participants said it was false. But when the AI predicted a statement was false, more than 40% of participants accused the statement of being false.
Additionally, among participants who requested and received an AI prediction, 84% followed it, making an accusation whenever the AI said the statement was false.
“This shows that once people have such an algorithm, they will rely on it and perhaps change their behavior. If the algorithm labels something as a lie, people are ready to jump on it. This is quite alarming and shows that we have to be very careful with this technology,” says Köbis.
Interestingly, participants appeared reluctant to use AI as a lie detection tool. In both the blocked and choice groups, only a third of participants requested the AI’s prediction.
The result surprised the team, because the researchers had told participants that the algorithm could detect lies better than humans. “This could be due to a very strong effect that we have observed in various studies, where people are overconfident in their lie detection abilities, even though humans are really bad at it,” says Köbis.
AI is known to make frequent mistakes and reinforce biases. Given these findings, Köbis suggests that policymakers should reconsider using the technology on important and sensitive issues like granting asylum at borders.
“There’s so much hype around AI, and a lot of people think that these algorithms are really powerful, even objective. I’m really worried that this will lead people to rely too much on them, even if they don’t work very well,” Köbis says.
About this AI research news
Author: Kristopher Benke
Source: Cell Press
Contact: Kristopher Benke – Cell Press
Image: The image is credited to Neuroscience News
Original research: Open access.
“Lie Detection Algorithms Disrupt the Social Dynamics of Accusation Behavior” by Nils Köbis et al. iScience
Abstract
Lie detection algorithms disrupt the social dynamics of accusation behavior
Strong points
- Supervised learning algorithm outperforms human accuracy at detecting lies from text
- Without algorithmic support, people are reluctant to accuse others of lying
- Availability of the lie detection algorithm increases people’s accusation rates
- 31% of participants request algorithmic advice, and most of those who do follow it
Summary
Humans, aware of the social costs associated with false accusations, are generally reluctant to accuse others of lying. Our study shows how lie detection algorithms disrupt this social dynamic.
We develop a supervised machine learning classifier that outperforms human accuracy and conduct a large-scale incentivized experiment manipulating the availability of this lie detection algorithm.
In the absence of algorithmic support, people are reluctant to accuse others of lying, but when the algorithm becomes available, a minority actively seeks out its prediction and systematically relies on it to make accusations.
Although those who request algorithmic predictions are not inherently more likely to accuse, they are more likely to follow predictions suggesting an accusation than those who receive such predictions without actively seeking them.