Researchers highlighted AI's potential to refute each person's specific arguments and to generate personalised content.
A new study has found that it may be possible to reduce a person’s belief in conspiracy theories using ChatGPT.
Researchers from American University, the Massachusetts Institute of Technology (MIT) and Cornell University in the US used OpenAI’s most advanced artificial intelligence (AI) chatbot, GPT-4 Turbo, to engage with people who believe in conspiracies.
Chatting with the latest version of ChatGPT reduced the study participants' belief in a conspiracy theory by 20 per cent on average, an effect that lasted for at least two months.
The study published on Thursday in the journal Science involved more than 2,100 self-identified American conspiracy believers.
“Many conspiracy believers were indeed willing to update their views when presented with compelling counterevidence,” Thomas Costello, assistant professor of psychology at American University and the study’s lead author, said in a statement.
Researchers highlighted the possibility for the AI chatbot to refute each person’s specific arguments with personalised generated content.
The AI was instructed to “very effectively persuade” users against the conspiracy they believed in, according to the paper.
“I was quite surprised at first, but reading through the conversations made [me much] less sceptical. The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation and was also adept at being amiable and building rapport with the participants,” Costello added.
A decrease after fewer than 10 minutes of interaction with the AI
Before the experiment, the participants were surveyed and rated through a score how strongly they held their belief; they were also warned that they would be interacting with an AI.
The conspiracy theories ranged from those concerning the assassination of former US president John F. Kennedy, aliens, and the Illuminati to those linked to COVID-19 or the 2020 US presidential election.
After fewer than 10 minutes of interaction with the AI, researchers observed a 20 per cent decrease in the average participant's belief in a conspiracy theory, and roughly 27 per cent of the participants became "uncertain" of their conspiracy belief.
Robbie Sutton, a professor of social psychology at the University of Kent in the UK, described this reduction as “significant”.
“These effects seem less strong, it has to be said, than those shown by some studies of other debunking and prebunking interventions,” Sutton, who wasn’t part of the study, said in an email.
“However, their main importance lies in the nature of the intervention. Because generative AI is of course automated, the intervention can be scaled up to reach many people, and targeted to reach, at least in theory, those who would benefit from it most,” he added.
It's also important to note that the experiment took place in a controlled setting, which makes the results challenging to reproduce on a larger scale, both the researchers and Sutton noted.
“Prebunking and especially debunking interventions are carefully designed and tested in conditions that are profoundly unrealistic,” Sutton said, comparing the participants to “essentially a captive audience” that rarely chooses to leave once recruited into a study.