OpenAI has developed a tool that detects AI-generated text with 99.9% accuracy. Although the tool is ready to launch, the company has postponed its release, a decision shaped by internal debate that has dragged on for nearly two years.
The project, which centers on a method of watermarking AI-generated text, has created tension within OpenAI. According to sources cited by The Wall Street Journal, some employees argue that the tool is vital to preventing the misuse of AI in academic settings, while others fear that deploying it could alienate a significant portion of ChatGPT's users.
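The article does not describe OpenAI's actual scheme, but statistical text watermarks of this kind have been described in the research literature (e.g., the "green-list" approach of Kirchenbauer et al., 2023): the generator is nudged toward a pseudo-randomly chosen subset of the vocabulary, and a detector later checks whether a suspiciously high fraction of the text's tokens falls in that subset. A minimal detection sketch, in which every function name and parameter is an assumption for illustration only:

```python
import hashlib

# Hypothetical illustration of a "green-list" text watermark detector,
# in the style of published research (Kirchenbauer et al., 2023).
# OpenAI has not disclosed its method; nothing here reflects its design.

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green"

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign the token to the green list, seeded by the
    preceding token so the same split is reproducible at detection time."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255 < GREEN_FRACTION

def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens on the green list. Watermarked text, whose
    generator was biased toward green tokens, scores well above the
    roughly 0.5 expected of ordinary, unwatermarked text."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Ordinary human-written text should hover near the base rate of 0.5.
rate = green_rate("the quick brown fox jumps over the lazy dog".split())
```

Because the green/red split is keyed to the preceding token, any edit that perturbs the token sequence, such as translating the text or sprinkling in random characters, reshuffles the assignments and washes out the signal, which is exactly the fragility the reporting describes.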
An internal survey revealed that approximately one-third of ChatGPT's loyal users would be uncomfortable with the inclusion of this anti-cheating technology. This reaction has created a dilemma for OpenAI, which is torn between its commitment to transparency and the need to maintain its user base. According to internal documents, the technology could disproportionately affect certain groups, such as non-native English speakers.
OpenAI has also expressed concern that the tool could be vulnerable to simple circumvention methods, such as machine translation or the insertion of random characters, and has chosen to weigh these risks carefully before proceeding with a launch. In addition, the company has prioritized the development of audio and visual watermarking technologies because of their potentially greater impact, especially in a U.S. election year.
Despite the challenges, the tool's potential to address plagiarism and AI misuse in education is significant. According to The Wall Street Journal, teachers and educational technology experts urgently need tools that can identify AI-generated text. However, OpenAI's fear of negative repercussions remains an obstacle to its release.
OpenAI has received support from several employees and developers who strongly believe in the tool's benefits, especially in education. Even so, the company has opted to keep evaluating alternatives that may prove less controversial with its user base. According to The Wall Street Journal, OpenAI's senior executives are deliberating over how to balance technological innovation with market acceptance.
As OpenAI continues to assess the situation, the launch of the anti-cheating tool remains on hold. In the meantime, the company faces the challenge of deciding how and when to deploy this technology, aware that the decision could have significant implications for its reputation and the future of AI-assisted education.