With social media and targeted advertising already nudging people into impulsive purchasing and voting decisions, the future looks bleaker still. Researchers at the University of Cambridge claim that artificial intelligence (AI) tools could soon be used to manipulate the public into making decisions they otherwise would not have made. The study introduces the concept of the "intention economy", a marketplace where AI can predict, understand and manipulate human intentions for profit.
Powered by large language models (LLMs), AI tools such as ChatGPT, Gemini and other chatbots will "predict and drive" users' decisions based on "intentional, behavioral, and psychological data". The study claims this new economy will succeed the current "attention economy", in which platforms compete for users' attention in order to serve advertisements.
The research states: "Anthropomorphic AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast amounts of intimate psychological and behavioral data, often obtained through informal, conversational dialogue."
The study cites the example of Cicero, an AI model created by Meta that attained human-level ability at the board game Diplomacy, a game that requires players to infer and predict opponents' intentions. Cicero's success shows how AI can learn to steer the people it interacts with toward specific goals, a capability that online could translate into nudging users toward products they would not otherwise have bought but that advertisers want to sell.
Selling the right to influence?
The dystopia doesn't stop there. The research claims that this level of personalization will allow companies like Meta to auction off user intent to advertisers, who would buy the right to influence users' decisions.
Dr Yaqub Chaudhary of Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) stressed the need to question whose interests these AI assistants serve, especially when they collect intimate conversational data.
"What people say when they have a conversation, how they say it, and what kinds of inferences can be drawn as a result in real time are much more intimate than records of online interactions," Dr Chaudhary said.
The internet is spooked
It's safe to say that these findings have rocked the internet, leaving users worried about what they are sharing with new-age chatbots.
One user said: "People are sharing a lot more personal information with AI than with a regular Google search. The better it understands you, the easier it is to manipulate you," while another quipped: "Now in other news, the sun rises in the east and sets in the west."
A third commented: “This level of persuasion would be dangerous in the hands of the best government, and it is going to be dangerous in the hands of the worst.”
The study calls for urgent consideration of these implications so that users can protect themselves from falling victim to AI-driven manipulation.