OpenAI developed a tool to detect and flag AI writing, but you may never get to use it
OpenAI has a watermark-based tool that can detect AI-written text with 99.9 percent effectiveness, and it has been technically ready for about a year. So why hasn’t it been released yet?
OpenAI’s ChatGPT can write, rewrite, and summarize any text. While AI-generated content saves time for many, the tool is also notorious in education. Since its launch, ChatGPT has been a topic of concern, and the debate about students using artificial intelligence to cheat has been widespread. But what if OpenAI had a way to detect text written by its own artificial intelligence? According to The Wall Street Journal, OpenAI does have a way to detect whether a person used ChatGPT to write, and the tool has been ready for release for about a year. But it seems OpenAI is not ready to give it the green light just yet.
The report suggests the release has been delayed in order to attract and retain users. In a survey OpenAI conducted among loyal ChatGPT users, about a third said they would switch away from the service if the anti-cheating technology were deployed. Meanwhile, the Center for Democracy and Technology, a technology policy nonprofit, found that 59 percent of middle and high school teachers were convinced that some students had used AI to help with schoolwork, up 17 points from the previous school year.
The Wall Street Journal report quoted an OpenAI spokesperson as saying the company has held the anti-cheating tool back because it carries certain risks and is complex. Given those complexities, a launch would likely affect the broader ecosystem beyond OpenAI.
OpenAI’s anti-cheating tool
OpenAI’s anti-cheating tool modifies the way ChatGPT selects words or word fragments (tokens) when generating text. The modification introduces a subtle pattern, known as a watermark, into the generated text, which can later be used to detect potential cheating or abuse.
The watermark, while invisible to humans, is recognizable by OpenAI’s detection technology, which produces a score indicating the likelihood that a document or passage was generated by ChatGPT.
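The report does not disclose OpenAI’s actual scheme, but published research on token-level watermarking follows a similar idea: bias token selection toward a pseudo-randomly derived “green” subset of the vocabulary, then score a text by how often its tokens fall in their green lists. The following is a toy sketch only; the vocabulary, function names, and 50/50 green-list split are illustrative assumptions, not OpenAI’s implementation:

```python
import hashlib
import random

# Toy vocabulary standing in for a real tokenizer's token set.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Derive a pseudo-random 'green' subset of the vocabulary from the
    previous token, so generator and detector agree without sharing state."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(length: int, seed: int = 0) -> list:
    """Toy generator that always picks its next token from the green list,
    embedding the watermark into the token sequence."""
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def detection_score(tokens: list) -> float:
    """Fraction of tokens drawn from their green list: ~1.0 for watermarked
    text, ~0.5 (chance level) for unwatermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / (len(tokens) - 1)
```

In this sketch, watermarked output scores near 1.0 while ordinary text hovers around 0.5, which also illustrates why longer passages are needed: with few tokens, an unwatermarked text can hit a high score by chance.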
Internal documents show that the watermarking technology is nearly flawless, achieving a 99.9 percent effectiveness rate. That figure only holds, however, when ChatGPT generates a sufficient amount of text; short passages carry too little signal to identify AI-generated content reliably.
Nevertheless, concerns remain that the watermark can be erased by simple techniques, such as running the text through Google Translate into another language and back, or asking ChatGPT to add emojis to the text and then removing them manually.
But according to the report, the main question is who would get access if the tool were released. If very few people have it, the tool is of little use; if many people do, the watermarking technique could be misused or defeated.
These concerns apply only to text. OpenAI has already released AI detection tools for images and audio, focusing its watermarking efforts on audiovisual rather than textual content, since AI-generated multimedia such as deepfakes can have more serious consequences than text.