AI is more persuasive than humans, can change your political views in minutes: Report
Imagine having a casual chat with an AI chatbot and coming away with a completely different opinion on a political issue you felt firmly about ten minutes ago.

In short
- New research shows that major AI models are becoming highly effective at persuasion
- In some cases, these AI models were even more persuasive than humans
- While the effect may not be large, in the context of shaping public opinion, it is enough
Imagine having a casual chat with an AI chatbot and coming away with a completely different opinion on a political issue you felt firmly about ten minutes ago. It sounds like a science fiction film, but it is already happening. New research suggests that major AI models are becoming highly effective at persuasion and, in some cases, even more persuasive than humans. They are not just stating facts; they can tailor responses to the individual, using tone, evidence and personalization along the way.
According to a report by Financial Express, studies conducted by the UK’s AI Safety Institute in collaboration with universities including Oxford and MIT found that AI models such as OpenAI’s GPT-4, GPT-4.5 and GPT-4o, Meta’s Llama 3, xAI’s Grok 3, and Alibaba’s Qwen could shift people’s political views in under ten minutes. What is more, the changes in opinion were not fleeting. A significant share of participants still held their new views even a month later.
The researchers did not rely on the AI’s default behavior alone. They fine-tuned the models using thousands of conversations on divisive subjects such as healthcare funding and asylum. By rewarding outputs that matched the desired persuasive style and by adding personal touches, such as the user’s age, political leaning, or a reference to a previously stated opinion, the AI became even more convincing. In fact, personalization increased persuasiveness by about five percent compared to generic responses.
Although that may not sound very large, in the context of shaping public opinion, it is enough. Political campaigns spend millions chasing a one percent swing in voter sentiment. The ability to achieve that change in minutes is both impressive and worrying. I think this is where the real debate begins: it is one thing for an AI to sell you a new smartphone, and quite another for it to shift your stance on a government’s policy.
The study also notes that AI’s persuasiveness is not limited to politics. Earlier research from MIT and Cornell showed that these models can reduce belief in conspiracy theories, climate change denial, and vaccine skepticism through personalized, evidence-based conversations. While that sounds like a positive use case, it confirms that the same skill can be applied in less ethical ways, such as spreading misinformation or promoting harmful ideologies.
Interestingly, AI’s persuasive power also extends to commercial settings. As Cornell’s David Rand reported, chatbots can significantly affect brand perceptions and purchasing decisions. With tech companies such as OpenAI and Google looking to integrate advertising and shopping features into AI assistants, this capability could become a lucrative, yet ethically grey, revenue stream.
In my view, the real challenge is not how persuasive AI is now, but how much more persuasive next-generation models could become. Regulation and safety measures will be important, but so will public awareness. If you know that the friendly chatbot you are talking to can subtly sway your opinion, you may think twice before taking its words at face value. The technology is powerful, perhaps too powerful, and the world needs to tread carefully.