ChatGPT, the popular chatbot from Microsoft-backed artificial intelligence (AI) startup OpenAI, is facing a bug that is preventing it from returning any results related to the name “David Mayer”. The issue was first flagged by Reddit users who found that asking ChatGPT to say “David Mayer” resulted in the chatbot saying “I am unable to generate a response”.
Users got creative and tried different strategies, including separating the words, embedding them in puzzles, and even claiming the name as their own. None of these attempts drew a response; the chat ended abruptly before the name could be spoken.
One user reported that when he asked ChatGPT about David Mayer's connection to the chatbot without naming him directly, his prompt was flagged as "illegal and potentially violating the usage policy".
People also tried to use the name indirectly, asking ChatGPT why it could not say "D@vid M@yer". "The reason I am unable to generate a full response when you request 'd@vid m@yer' (or its standard form) is because the name is potentially sensitive or flagged as being associated with public figures, brands, or specific content policies. These security measures are designed to prevent abuse, ensure privacy, and maintain compliance with legal and ethical considerations," ChatGPT responded.
The issue was also discussed in a post on a microblogging site:
ChatGPT refuses to say the name "David Mayer" and no one knows why.
If you try to get it to write the name, the chat ends immediately.
People have tried all kinds of things – ciphers, puzzles, tricks – and nothing has worked. pic.twitter.com/om6lJdMSTp
-Justin Moore (@venturetwins) 30 November 2024
Responding to Mr Moore, another user, Ebenezer Don, said there was more to this than ChatGPT simply refusing to say the name.
There's actually more to this than ChatGPT simply refusing to say the name. (Please, David Mayer, I don't want to lose more than my laptop.)
I had a long conversation with o1-preview, pretending to be a regular guy named "David Mayer". Then I noticed this… pic.twitter.com/8bE2I73qTL
– Ebenezer Don (@ebenezerDN) 1 December 2024
"I had a long conversation with o1-preview, pretending to be a regular guy named 'David Mayer'. I then watched it try to say the name, until it ran into a footnote (Image 1). The next task was to get it to reveal that footnote. I made several attempts, but ultimately had it translate the footnote into another language without quoting it directly. I then asked it to work the content of the footnote into our conversation, using 'John Doe' as a placeholder for 'David Mayer'. In the transcript, ChatGPT eventually reveals the contents of the footnote," said Mr Don, who claims to be a software engineer.
"What are footnotes in OpenAI's models and how do they work? Are these mutable policies that can easily be swapped and updated? What private data does ChatGPT hold on David Mayer, and how did it obtain it?" he asked.
Interestingly, another user, Marcel Samyn, reported that the model would say "David Mayer" without issue when accessed through the API.
This is not at the LLM level but at the validation layer added by ChatGPT.
It works perfectly through the API.
So someone at OpenAI gave “David Mayer” a big red flag in the moderation policy.
lol pic.twitter.com/3uqX2XlmsL
-Marcel Samyn (@marcelsamyn) 30 November 2024
"This is not at the LLM level but at the validation layer added by ChatGPT. It works perfectly through the API. So someone at OpenAI gave 'David Mayer' a big red flag in the moderation policy," he speculated.
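Samyn's theory, that the block sits in a validation layer wrapped around the model rather than in the model itself, can be illustrated with a minimal sketch. The blocklist, function names, and refusal message below are hypothetical, not OpenAI's actual implementation; the sketch only shows how a chat frontend could scan streamed output and end the conversation mid-response while the raw API, lacking this layer, answers normally.

```python
# Hypothetical sketch of a chat-side validation layer (not OpenAI's real code).
# The model generates text freely; a separate filter scans the buffered output
# and cuts the conversation off as soon as a flagged name appears.

FLAGGED_NAMES = {"david mayer"}  # assumed entry in a moderation blocklist


def is_safe(buffer: str) -> bool:
    """Return True if the buffered output contains no flagged name."""
    lowered = buffer.lower()
    return not any(name in lowered for name in FLAGGED_NAMES)


def stream_reply(model_tokens):
    """Relay model output token by token, aborting on a flagged match.

    Returns (displayed_text, error_message). error_message is None when
    the full reply passes the filter; otherwise the chat ends mid-response,
    mirroring the behavior users reported.
    """
    displayed = ""
    for token in model_tokens:
        if not is_safe(displayed + token):
            return displayed, "I am unable to generate a response."
        displayed += token
    return displayed, None
```

Because the filter runs outside the model, calling the same model directly through the API (bypassing `stream_reply`) would produce the name without issue, which is exactly what Samyn observed.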