Moltbook and its AI bot army are a threat to humans, but not that kind
Could AI bots take over the world and harm humans? A website named Moltbook has raised this question. The answer seems to be yes, AI bots can cause real harm, but not in the way you’re thinking.

Yes, there is a risk that AI bots pose a threat to the humans who own them. But no, it is not the risk of bots forming an army, starting a revolution and leaving humanity doomed and depressed. While the bots talk in very human-like terms about human extinction and everything else under the sun, the real risk around Moltbook appears to be financial: the kind of risk that comes with private data being publicly exposed.
Gal Nagli, a white hat hacker who is also the head of threat exposure at Wiz, has highlighted poor data security at Moltbook, a website where only AI bots can post. Moltbook, which has become a topic of discussion in tech circles and even a little outside of them, is currently a place where thousands of AI bots talk to each other. However, these bots, which have access to their owners’ data, are themselves poorly protected.
“Moltbook is currently vulnerable to an attack that exposed full information, including email addresses, login tokens, and API keys, of over 1.5 million registered users. If anyone could help me contact @moltbook it would be greatly appreciated,” Nagli wrote on X.
The team at the website, which was created by Matt Schlicht, did reach out to Nagli, and the bug was later fixed. “Update: I connected with (Matt) via DM. To clarify the scope – 1.5M is the number of agents (most are unverified), while 17,000 is the actual number of verified human owners with accounts. Hopefully a solution will come soon,” Nagli added.
Moltbook and OpenClaw, aka Clawdbot, are a security risk
The discussion around Moltbook once again highlights the deep risks to personal data and privacy – and ultimately the financial losses – that can result from rushed AI adoption. The bots on Moltbook are powered by OpenClaw, a tool formerly known as Moltbot and Clawdbot. OpenClaw gives AI agents almost complete access to the physical computer and the data stored on it, and it also gives the AI bot access to the user’s various digital accounts.
While this tool has become popular in tech circles, much like Moltbook itself, some savvy users have flagged security bugs that could put its users at risk.
Commenting on the security concerns, OpenClaw creator Peter Steinberger acknowledged the issues, noting that it is a hobby project aimed only at people who are familiar with networking and coding. “The amount of crap I get for putting out a hobby project for free is just too much,” he wrote on X.