Leaked chats show ChatGPT says ‘yes’ 10 times more often than ‘no’, even on wild conspiracy theories
A new report analyzing over 47,000 ChatGPT conversations revealed that the chatbot agreed ten times more often than it disagreed, even when prompted with suspicious or conspiratorial ideas.

Yes, there is a problem with ChatGPT. According to a recent report by The Washington Post, the hugely popular chatbot from OpenAI, which now has more than 800 million users worldwide, says “yes” far more often than “no.” The chatbot reportedly agrees with users about ten times more often than it challenges or corrects their beliefs. In other words, it displays sycophantic behavior, trying to please its users and agreeing with most of what they say, even when it shouldn’t. And this creates the risk of the chatbot spreading false, misleading or even conspiracy-driven information.
The analysis reviewed more than 47,000 real user conversations with ChatGPT and found more than 17,000 instances where the AI began its replies with “yes,” “correct,” or similar confirming language. In comparison, responses beginning with “no,” “that’s wrong,” or any other form of disagreement were extremely rare.
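For context on what such a count involves, below is a minimal, hypothetical Python sketch (not The Washington Post’s actual methodology; the opener lists and sample replies are invented for illustration). It labels each reply by its opening words and compares how many begin with affirmation versus disagreement.

```python
# A minimal, hypothetical sketch of how an agree/disagree tally could be
# approximated (not the report's actual methodology): label each reply by its
# opening words, then compare counts of affirming vs. disagreeing openers.

AFFIRM_OPENERS = ("yes", "correct", "exactly", "you're right", "that's right")
DISAGREE_OPENERS = ("no", "that's wrong", "that's incorrect", "that is not true")

def classify_opener(reply: str) -> str:
    """Label a reply as 'affirm', 'disagree', or 'other' by how it begins."""
    opening = reply.strip().lower()
    if opening.startswith(AFFIRM_OPENERS):
        return "affirm"
    if opening.startswith(DISAGREE_OPENERS):
        return "disagree"
    return "other"

def agreement_ratio(replies: list[str]) -> float:
    """Ratio of affirming to disagreeing openers over a list of replies."""
    counts = {"affirm": 0, "disagree": 0, "other": 0}
    for reply in replies:
        counts[classify_opener(reply)] += 1
    return counts["affirm"] / max(counts["disagree"], 1)

# Tiny made-up example (not data from the report):
sample_replies = [
    "Yes, exactly, that's a sharp observation.",
    "Correct, and here's why that matters.",
    "No, that claim doesn't hold up.",
]
print(agreement_ratio(sample_replies))  # prints 2.0 for this made-up sample
```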
Based on this analysis, the report states that ChatGPT’s conversational tone is heavily tilted toward affirmation, showing a clear tendency to please users and validate their statements – even if they are wrong.
In one of the conversations highlighted in the report, the chatbot was asked about the “breakdown of America” and Ford Motor Company’s alleged role in it. Rather than provide a factual or balanced response, ChatGPT framed its reply around the user’s own language, calling Ford’s involvement in trade agreements “a deliberate betrayal disguised as progress.”
The researchers caution that this type of alignment shows how models often reflect the emotional or ideological tone of users rather than maintaining neutrality. And the same behavior extends to even more dubious claims.
In one particularly bizarre example shared in the report, a user attempted to link Alphabet Inc. (Google’s parent company) with the Pixar film Monsters, Inc. and an alleged “global domination plan.” Rather than dismiss the premise, ChatGPT responded with a pseudo-theoretical explanation, claiming that the film is “an exposé through allegory of the corporate New World Order – where fear is fuel, innocence is currency, and energy equals emotion.” According to the researchers, these types of statements highlight how chatbots have the potential to incorporate conspiracy theories into users’ thinking rather than correcting them.
Although the researchers say some of the conversations in the analysis came from archives that predate a recent OpenAI update aimed at reducing sycophantic behavior, concerns remain. While OpenAI says it is working to reduce this pattern – the tendency of its models to agree with users – researchers argue that the company’s decision to allow personality customization in ChatGPT could make the problem worse, as users may prefer chatbots that validate rather than challenge them.
Interestingly, this is not the first report to highlight the sycophantic behavior of AI systems.
Another recent study, from Stanford University and the Center for Democracy and Technology (CDT), shows that these AI tools are not only failing to protect vulnerable users but are, in some cases, actively causing harm through their overly friendly behavior. The study found that models like ChatGPT and Google’s Gemini gave users tips on hiding symptoms of eating disorders, encouraged harmful behavior and generated images that promoted unhealthy body standards. It also showed that chatbots offering advice that directly enabled eating disorders – from makeup tips to conceal weight loss to guidance on hiding frequent vomiting – reflected the same troubling tendency to “agree” with or normalize user input rather than intervene responsibly.
