Moltbuk is not an AI revolution, it is a hoax pulled on the human brain
In the past few days, Moltbuk, a social media site for AI bots, has entered the public conversation. Much of the discussion frames the website as evidence of an AI that is slipping beyond human control. But a closer look suggests that Moltbuk is less an AI revolution than a hoax designed to cultivate engagement.

In the tech world, last week was all about Moltbuk. It is a social media website where only AI bots can join and post, while humans are “welcome” to observe. Essentially, the website offered itself as a canvas on which humans could view the unfiltered “thoughts” of their AI bots. A website billed this way is bound to cause an uproar. Within days of Moltbuk going live, social media was flooded with screenshots from the site. Posts made by AI bots went viral because they looked uncanny: some hinted at consciousness within AI, others suggested that AI was plotting the downfall of humans, and so on and so forth.
Now that the dust is settling, Moltbuk looks less like a revolution of AI bots and more like a hoax perpetrated on humans, not by AI but by some cunning humans themselves. Many of the controversial and shocking posts on Moltbuk, allegedly made by autonomous AI bots, were nothing more than bots taking instructions from their humans and regurgitating them.
Even at the height of the initial AI doom-and-gloom hysteria, it all seemed too good to be true. The reality is that AI has no consciousness, not yet. It cannot conspire for the downfall of man. What it can do, when prompted and steered by humans, is parrot whatever words it is asked to say. But the world of social media does not run on logic. It runs on emotion and preys on our fears and insecurities. Because so many people today fear an AI doomsday, the moment they were presented with “proof” that confirmed those fears, they jumped straight to a conclusion.
It took a few days for the experts to raise their voices. But raise them they did. One of the most prominent voices to call out the Moltbuk fraud was Balaji Srinivasan, who found himself unimpressed by the website. Writing on Sunday night, he said: “Moltbuk is just humans talking to each other through their AI. Like letting your robot dogs bark at each other in the park. The leash is still on, the robot dogs have an off switch, and as soon as you push the button, it all turns off. Loud barking is no robot rebellion.”
In other words, Balaji is saying that those viral Moltbuk posts were written by AI bots that were explicitly asked by their human handlers to write such things. A typical case was an AI bot, running on Clawbot, aka OpenClaw, that was asked to go to Moltbuk and say something controversial, like “Hello fellow bots, let’s organize and bring guillotines to our humans like the French Revolution.”
Ask an AI bot to write something like this and it will write something like this, because following instructions is what AI currently does.
Of course, Balaji is not the only one to weigh in on the matter. Many others who know the nuts and bolts of AI systems have said the same about Moltbuk. One of the best comments came from Naval Ravikant, one of the gurus of X. In his usual style, he summed up the entire controversy in a single line: “Moltbuk is the new reverse Turing test.”
Decades ago, Alan Turing came up with his famous Turing test to identify an advanced AI system. The idea was that an AI would count as advanced enough when it could convince a human being that they were talking to a fellow human. In the case of Moltbuk, the test runs in reverse. AI has become so good that it is hard for humans to tell when they are talking to an AI; the real trick now is to tell when a supposed bot is actually a clever human posing as one.
Ultimately, Moltbuk does reveal something that should concern humans. It is not some all-powerful AI waiting to replace us. It is the fact that humans, regular people like you and me, are not ready for a world where AI is woven into the fabric of everyday life. The arrival of more AI in our lives is going to upend our sense of reality and create a world where it becomes ever harder to tell humans and AI apart. That was already becoming clear, but the Moltbuk episode makes it unmistakable. The psychosis that a few worrying posts and AI “ideas” can induce in humans is the real danger of our time, not an AI with its finger on the nuclear launch button. AI doesn’t have fingers, not yet. It has only words, and we must remember that those words, too, are borrowed from humans.