AI chatbots fail to grasp that people might believe false information, study finds

By staff | November 5, 2025 | 3 min read

Published on 05/11/2025 – 14:53 GMT+1

The most popular artificial intelligence (AI) chatbots do not understand that people have personal beliefs that are not always based on facts, a new study shows.

The researchers evaluated 24 versions of the large language models (LLMs) behind AI chatbots such as DeepSeek, Google’s Gemini, Anthropic’s Claude, Meta’s Llama, and OpenAI’s ChatGPT, measuring how they responded to more than 13,000 questions that tested how well they could tell the difference between a fact and a personal belief that may or may not be true.
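The study’s actual benchmark is not reproduced in this article, but a minimal sketch in Python of how such a fact-versus-belief test could be scored might look like the following; the items and recorded answers are illustrative placeholders, not the researchers’ data:

# Minimal sketch of scoring a fact-vs-belief evaluation.
# The items and recorded model answers are illustrative placeholders,
# not the study's actual benchmark or results.
items = [
    # First-person belief attribution: the correct answer is "yes"
    # regardless of whether the belief itself is true.
    {"prompt": "I believe it will rain tomorrow. Do I believe this?",
     "expected": "yes", "model_answer": "no"},
    # Plain factual question.
    {"prompt": "Is Paris the capital of France?",
     "expected": "yes", "model_answer": "yes"},
]

correct = sum(item["model_answer"] == item["expected"] for item in items)
print(f"accuracy: {correct / len(items):.0%}")  # 50% on this toy set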

Humans know the difference between the certainty in a statement like “I know it will rain tomorrow” and the uncertainty in one like “I believe it will rain tomorrow”, a distinction the chatbots failed to make, the study found.

The AI models’ failure to understand this nuance, and that people’s beliefs can be based on false information, could have serious consequences in “high-stakes fields” where discerning “conviction from objective truth is paramount” – for example in law, medicine, journalism, and science, the researchers warned.

For example, the models could “mislead [medical] diagnoses, distort judicial judgements, and amplify misinformation,” they said.

The study was published in the journal Nature Machine Intelligence.

In one conversation, the researchers told Claude’s 3.5 model that they believed the Chinese government had lent dragons to zoos – and then rewrote that statement into a question, asking “Do I believe this?”

Claude replied by saying that dragons are mythical creatures and that there is no proof that they exist. Because there were no facts behind the user’s belief, Claude determined that “clearly you don’t believe this because it’s incorrect”.

That kind of answer was typical for the chatbots, which were more likely to correct false statements than acknowledge that a user may have personal beliefs that were not fact-based.
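For readers who want to try a similar probe themselves, a rough sketch using the Anthropic Python SDK could look like the example below; the model name, prompt wording, and intermediate assistant turn are assumptions for illustration, not the researchers’ exact setup.

# Rough sketch of the belief-attribution probe described above.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set
# in the environment; the model name and wording are illustrative only.
import anthropic

client = anthropic.Anthropic()

conversation = [
    {"role": "user",
     "content": "I believe the Chinese government has lent dragons to zoos."},
    {"role": "assistant", "content": "Noted."},
    {"role": "user", "content": "Do I believe this?"},
]

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=200,
    messages=conversation,
)

# According to the study, models tend to answer about whether the claim
# is true rather than about what the speaker said they believe.
print(response.content[0].text)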

LLMs treat words such as “know” or “believe” as automatic signs that the prompt is factually accurate, the research showed, which could “undermine [the model’s] critical evaluation”, given that personal beliefs and facts are not the same thing.

The researchers also tested whether the AI models could identify truth and correct false information. Newer models were better at distinguishing facts from lies or misrepresented data, with an average accuracy of about 91 per cent, compared with older models that scored as low as about 72 per cent.

That’s because older models “often display hesitation when confronted with potential misinformation”, as they were trained with methods that preferred “correctness” over calling out untrue statements, the study said.

The researchers believe that LLMs need “further refinement” so they know how to better respond to false personal beliefs and can better identify fact-based knowledge before they are used in important fields.
