ChatGPT 5 said "I don't know" after thinking for 34 seconds, and Elon Musk called it an impressive response. Here's why
Does ChatGPT have all the answers? Yes and no. In the past, it would cook up anything for the sake of a response, even at the cost of factual accuracy. A recent exchange suggests AI may be changing for the better: when it knows, it knows, and when it doesn't, it can now tell you that it doesn't know. That is impressive, and even Elon Musk agrees.

In short
- ChatGPT 5 admits that it cannot answer a user's query
- Elon Musk sets a personal feud aside and praises OpenAI's model
- The response suggests that AI hallucinations can be fixed
Elon Musk is going head-to-head with OpenAI's ChatGPT in the AI race with his own Grok chatbot. While the xAI CEO has consistently argued that Grok 4 is more powerful than ChatGPT 5, one response has won him over.
On X, a user named Kol Tregaskes shared a screenshot of his conversation with ChatGPT 5's thinking model. After thinking for 34 seconds, the OpenAI chatbot replied, "I don't know – and I can't find out firmly."
Musk responded to the post, with the tech billionaire admitting that the response was "impressive".
Large language models (LLMs) like ChatGPT face a major challenge when it comes to hallucinations: an AI will often confidently generate misinformation in response to a user's query.
In this case, ChatGPT 5's response marks a major step forward. The chatbot chose to admit that it could not provide the information rather than supplying wrong details just for the sake of an answer.
This not only prevents the spread of misinformation but also builds trust from the user's point of view. A user knows the model will only provide an answer when it is sure of the information.
Since the original release of ChatGPT in November 2022, OpenAI has worked on reducing the likelihood of hallucinations. With ChatGPT 5, the chance of such incorrect responses stands at around 10 percent.
Such steps are necessary for LLM development. AI companies aim to reach Artificial General Intelligence (AGI) with their models. AGI refers to an AI model with human-level intelligence and understanding. For now, however, it remains a theoretical concept.
OpenAI doesn't want ChatGPT to be your primary source
OpenAI is aware of ChatGPT 5's potential shortcomings when it comes to hallucinations. Despite the constant progress, the company does not want users to treat the model as their primary source of information.
ChatGPT chief Nick Turley said that users should still verify information obtained from the chatbot. He told The Verge, "Until I think that we are more reliable than a human expert on all domains, not just some domains, I think we are going to continue to advise you to double-check your answer." Turley urged users to treat ChatGPT as a way to get a second opinion.
Since its release on 7 August, ChatGPT 5 has driven up the number of active users. The company has also launched an affordable premium plan for India, priced at Rs 399 per month.