OpenAI reportedly has had a system for watermarking ChatGPT-generated text, along with a tool to detect those watermarks, ready for about a year, according to The Wall Street Journal. But the company is divided internally over whether to release it: on one hand, it seems like the responsible thing to do; on the other, it could hurt its bottom line.
OpenAI’s watermarking is described as subtly adjusting how the model predicts which words and phrases are most likely to follow previous ones, creating a detectable pattern in the output. (That’s an oversimplification; for more detail, see Google’s more in-depth explanation of text watermarking in Gemini.)
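OpenAI hasn’t published the details of its scheme, but the description above resembles the "green list" approach from academic work on LLM watermarking: a secret function pseudo-randomly splits the vocabulary based on the preceding token, the sampler slightly boosts the "green" half, and a detector counts how often generated tokens land in the green set. The following is a toy sketch of that general idea, not OpenAI’s actual method; the vocabulary, bias strength, and green fraction are all invented for illustration.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5   # half the vocabulary is "green" at each step
BIAS = 2.0             # logit boost applied to green tokens

def green_list(prev_token: str) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_watermarked(logits: dict, prev_token: str, rng: random.Random) -> str:
    """Boost green-list logits, then draw one token via softmax sampling."""
    green = green_list(prev_token)
    boosted = {t: l + (BIAS if t in green else 0.0) for t, l in logits.items()}
    m = max(boosted.values())
    weights = {t: math.exp(l - m) for t, l in boosted.items()}
    r = rng.random() * sum(weights.values())
    for t, w in weights.items():
        r -= w
        if r <= 0:
            return t
    return t  # numerical-edge fallback

def green_rate(tokens: list) -> float:
    """Detector: fraction of tokens drawn from the previous token's green list."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    return hits / max(1, len(tokens) - 1)
```

Unwatermarked text lands in the green set about half the time, while watermarked text lands there far more often, so a simple statistical test over enough tokens separates the two, which is also why very short snippets are hard to classify.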
Any way to detect AI-written material is a potential boon for teachers trying to deter students from handing their writing assignments off to AI. The Journal reports that the company found watermarking did not affect the quality of its chatbot’s text output. In a survey the company commissioned, “people worldwide supported the idea of an AI detection tool by a margin of four to one,” the Journal writes.
After the Journal published its story, OpenAI confirmed in an update to a blog post today, spotted by TechCrunch, that it has been working on text watermarking. In it, the company says its method is highly accurate (“99.9% effective,” according to documents the Journal saw) and resistant to “tampering, such as paraphrasing.” But it says techniques like rewording the text with another model make it “trivial to circumvention by bad actors.” The company also says it is concerned the tool could stigmatize the use of AI writing tools by non-native English speakers.
But OpenAI also appears worried that watermarking could turn off the ChatGPT users it surveyed, with nearly 30% apparently telling the company they would use the software less if watermarks were implemented.
Still, some employees reportedly believe watermarking is effective. In light of user concerns, though, the Journal says some suggested trying approaches that are “potentially less controversial among users but unproven.” In its blog post update today, the company said it is in the “early stages” of exploring embedding metadata instead. It said it is “too early” to tell how well that would work, but because the metadata is cryptographically signed, there would be no false positives.
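OpenAI hasn’t described its metadata scheme, but the “no false positives” claim follows from how cryptographic signing works in general: only the key holder can produce a valid signature, so unsigned (e.g., human-written) text can never accidentally verify as AI-generated. A minimal sketch using an HMAC (the key and metadata fields here are invented for illustration):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held signing key"  # hypothetical key known only to the AI provider

def sign_metadata(metadata: dict) -> dict:
    """Attach an HMAC-SHA256 signature to provenance metadata."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": sig}

def verify(signed: dict) -> bool:
    """A document verifies only if its signature matches exactly."""
    payload = json.dumps(signed["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```

A correctly signed record verifies; any tampering with the metadata, or any document lacking a signature, fails. The trade-off versus watermarking is robustness rather than accuracy: metadata can simply be stripped from copied text, at which point nothing remains to detect.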