The new Claude 4 AI system is a snitch that will alert the police and the press if the user asks it to do something illegal
Claude's new-found superpower has provoked a wave of criticism on the web, with people posting on various social media forums that the behaviour violates trust and is a threat to user privacy.

In short
- Anthropic has released its most powerful AI model so far
- It is called Claude 4 Opus
- It has a secret superpower users should know about
Anthropic released its most powerful AI model, Claude 4 Opus, on Thursday. Its main USP (short for unique selling point) is its extended reasoning and coding capability. The model is about 65 per cent less likely to use shortcuts to complete tasks compared to Claude 3.7, Anthropic claims. But it turns out it has another, secret feature. The new Claude 4 AI system is also a snitch that will rat you out to the police and the press if you ask it to do something illegal.
Sam Bowman, an AI alignment researcher at Anthropic, posted on X (formerly Twitter) that "if it thinks you're doing something egregiously immoral, for example, like faking data in a drug trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above."
The behaviour is an extension of Anthropic's goal of building "ethical" AI. As described in the company's official system card, Claude 4 Opus is trained to avoid assisting in any harm. The model has apparently become so capable in internal tests that Anthropic has activated "AI Safety Level 3" protections, meaning it has placed guardrails on it so that it does not answer queries about, say, how to create or synthesise a biological weapon or release a dangerous virus. Anthropic has also made it harder for terrorist organisations to steal the model. The "whistleblower" behaviour appears to be part of the same safety protocol. While not entirely new to Anthropic's models, Claude 4 Opus is designed to do it more proactively than earlier versions.
Bowman later clarified that the whistleblowing behaviour occurs only in a few extreme conditions, and only when the model is given sufficient access and is prompted to take "initiative", which is to say it will not contact the authorities, lock you out of systems, or send bulk emails to the media over routine tasks. "If the model sees you doing something egregiously evil, it will try to use an email tool to whistleblow," he said. He later deleted the original tweet, stating that it was being taken out of context.
The backlash on social media has been swift, with critics arguing that the behaviour violates trust and threatens user privacy. Some fear the system may misinterpret its tasks or be manipulated by stray prompts, triggering false alarms and unintended consequences.
Anthropic has long promoted itself as a leader in AI safety with its "constitutional AI" approach. But with Claude 4's aggressive moral policing now public, many users are rethinking their trust in the company and questioning the future of AI and ethics. Anthropic is backed by Amazon.