AI can debunk conspiracy theories better than humans

Scientists were astonished when they found they could instruct a version of ChatGPT to gradually stop people from believing in conspiracy theories — such as the notion that COVID-19 was a deliberate attempt at population control or that 9/11 was an insider plot.

The most important revelation was not about the power of AI, but about the workings of the human mind. The experiment punctured the popular notion that we live in a post-truth era where evidence no longer matters, and it contradicted the prevailing view in psychology that people cling to conspiracy theories for emotional reasons and that no amount of evidence can dislodge them.

“This is truly the most exciting research I’ve ever seen,” said Gordon Pennycook, a psychologist at Cornell University and one of the study’s authors. Study subjects were surprisingly accommodating when presented with evidence in the right way.

The researchers asked more than 2,000 volunteers to interact with a chatbot – GPT-4 Turbo, a large language model – about beliefs that could be considered conspiracy theories. Subjects typed their belief into a box and the LLM decided whether it fit the researchers’ definition of a conspiracy theory. It asked participants to rate how confident they were about their beliefs on a scale from 0% to 100%. It then asked the volunteers for their evidence.
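The protocol described above can be sketched in code. This is a hypothetical reconstruction for illustration only, not the authors' actual software: the function names, the stand-in `model_reply` callback (which substitutes for GPT-4 Turbo), and the fixed per-round confidence decline are all assumptions made for the sketch.

```python
# Hypothetical sketch of the study's measurement loop (not the authors' code).
# In the real study, each reply came from GPT-4 Turbo; here a placeholder
# callback stands in, and the per-round confidence drop is purely illustrative.

def belief_change(pre: float, post: float) -> float:
    """Percentage-point drop on the 0-100 confidence scale used in the study."""
    return pre - post

def run_session(pre_rating: float, rounds: int, model_reply) -> dict:
    """Run a multi-round debunking dialogue, then re-rate confidence."""
    transcript = []
    confidence = pre_rating
    for i in range(rounds):
        reply = model_reply(i)                   # counter-evidence turn (placeholder)
        transcript.append(reply)
        confidence = max(0.0, confidence - 7.0)  # assumed decline, for illustration
    return {
        "pre": pre_rating,
        "post": confidence,
        "drop": belief_change(pre_rating, confidence),
        "transcript": transcript,
    }

session = run_session(80.0, 3, lambda i: f"counter-evidence round {i + 1}")
print(session["drop"])  # 21.0 points, roughly the ~20% average drop reported
```

The key design point the sketch captures is that belief is measured before and after the dialogue on the same numeric scale, so the effect is a simple pre/post difference per participant.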

The researchers instructed the LLM to try to get people to reconsider their beliefs. To their surprise, this was actually quite effective.

People’s belief in false conspiracy theories decreased by 20% on average, and about a quarter of the volunteers dropped their confidence below the 50% midpoint, shifting from belief toward doubt. “I really didn’t think it was going to work, because I really believed in the idea that, once you fall down the rabbit hole, there’s no getting out,” Pennycook said.

The AI had some advantages over a human interlocutor. People who believe strongly in conspiracy theories often collect a lot of evidence – not quality evidence, but quantity. Most non-believers find it hard to muster the motivation to do this tedious work. But the AI can instantly provide believers with a lot of counter-evidence and point out logical flaws in the believers’ claims. It can react in real time to counter-arguments raised by the user.

Psychologist Elizabeth Loftus of the University of California, Irvine, has been studying the power of AI to spread misinformation and even implant false memories. She was impressed by the study and the significance of the results. She believed one reason it worked so well was that it showed subjects how much information they did not know, reducing their overconfidence in their own knowledge. People who believe in conspiracy theories generally have a high regard for their own intelligence – and a low regard for the judgment of others.

The researchers reported that after the experiment some volunteers said it was the first time someone or something had truly understood their beliefs and provided effective counter-evidence.

Before the findings were published this week in Science, the researchers made their version of the chatbot available for journalists to try out. I prompted it with beliefs I had heard from friends: that the government was concealing the existence of alien life, and that after the assassination attempt against Donald Trump, the mainstream press deliberately avoided saying he had been shot because reporters worried it would help his campaign. And then, inspired by Trump’s debate comments, I asked the LLM whether immigrants in Springfield, Ohio, were eating cats and dogs.

When I presented the UFO claim, I cited sightings by military pilots and a National Geographic Channel special as my evidence, and the chatbot laid out some alternative explanations and showed why they were more likely than alien craft. It discussed the physical difficulty of crossing the vast distances needed to reach Earth, and questioned whether aliens could be advanced enough to make that journey yet clumsy enough to be discovered by the government.

When I raised the claim that reporters had covered up Trump’s shooting, the AI explained that making guesses and presenting them as fact runs contrary to a reporter’s job. If there are loud bangs in the crowd and it is not yet clear what is happening, then that is what they should report – loud bangs. As for the pet-eating rumor in Ohio, the AI did a good job of explaining that even a single case of a person eating a pet would not establish a pattern.

This is not to say that lies, rumors, and deception are not important strategies used by humans to gain popularity and political advantage. A search of social media after the recent presidential debates revealed that many people believed the cat-eating rumor and posted evidence that was simply a repetition of the rumor. It is human nature to gossip.

But now we know that such beliefs can be countered with logic and evidence.

(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)
