An "anti-cheating" tool for ChatGPT exists, but OpenAI is currently holding it back

Laurie


A significant debate has been brewing within OpenAI for nearly two years. As ChatGPT becomes increasingly utilized across various sectors, including education, the demand for an effective way to detect texts generated by ChatGPT is rising. According to information obtained by the Wall Street Journal, OpenAI has developed a tool specifically for this purpose, but they are still determining what to do with it.

An Invisible Watermark on Texts

Documents obtained by the Wall Street Journal reveal that a tool capable of detecting ChatGPT-generated texts with "99.9% accuracy" has been operational for nearly a year. However, the company has kept it under wraps for now. "It could be put online with a click of a mouse," admitted a source.

The issue is that such a tool could significantly reduce the appeal of OpenAI's chatbot. A survey cited by the Wall Street Journal indicates that one-third of ChatGPT users might be deterred by the addition of "anti-cheating" technology. Furthermore, there are concerns within OpenAI's teams about the potential negative impact on non-native English speakers who use ChatGPT to help with specific formulations.

In practical terms, this tool would rely on inserting a watermark into the texts generated by ChatGPT. Invisible to the naked eye, this watermark could be easily identified by a specialized detection tool. However, this method poses challenges: making such a tool widely available could let people reverse-engineer how the watermark is inserted, rendering the detection ineffective.
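OpenAI has not disclosed how its watermark works, but academic research on text watermarking typically relies on a "green-list" scheme: at each generation step, a keyed hash pseudo-randomly splits the vocabulary into "green" and "red" tokens, and the model is nudged toward green ones. A detector then counts how often tokens fall on the green list; unwatermarked text hovers near chance, while watermarked text scores well above it. A minimal, purely illustrative sketch of the detection side (all names and the 50/50 split are assumptions, not OpenAI's actual scheme):

```python
import hashlib

GREEN_FRACTION = 0.5  # hypothetical: half the vocabulary is "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Hash the (previous token, candidate token) pair; the parity of the
    # first digest byte pseudo-randomly assigns the candidate to the
    # green or red list. A real scheme would use a secret key here.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_ratio(tokens: list[str]) -> float:
    # Fraction of tokens that landed on the green list. Unwatermarked
    # text should sit near GREEN_FRACTION; watermarked text, where the
    # generator was biased toward green tokens, sits noticeably above it.
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

This also illustrates why the evasion tricks described below work: paraphrasing or round-tripping a text through a translator replaces enough tokens that the green-token ratio drops back toward chance.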

Business or Transparency?

Advocates for maintaining the status quo argue that removing the watermark would be easy, for example, by passing a text through Google Translate or asking ChatGPT to add and then remove emojis. Such alterations would render the anti-cheating tool ineffective.


Faced with this issue, some teachers have already devised their own methods to detect AI-generated texts. Josh McCrain, a political science professor at the University of Utah, told the Wall Street Journal that he hid an instruction in an assignment prompt asking for a reference to Batman. Students who copied and pasted the ChatGPT response were immediately caught.

Beyond these amusing anecdotes, the debate over ChatGPT's anti-cheating tool highlights the tension between transparency and business interests that has been dividing OpenAI's teams. This internal struggle echoes the events that led to the brief dismissal of Sam Altman last year.

In my own teaching experience, I’ve seen how students can be incredibly creative when trying to get around detection methods. It’s almost like a game for them. However, this issue is not just about catching cheaters; it’s about maintaining trust and integrity in the tools we use. Whether OpenAI will prioritize transparency or business interests remains to be seen, but it’s a decision that will undoubtedly have significant implications for the future of AI-generated content.
