❘ Published: 2023-07-27T11:40:29
❘ Updated: 2023-07-27T11:40:41
The creators of ChatGPT, OpenAI, have terminated support for their proprietary AI detector after consistently poor performance.
OpenAI launched the classifier in January, but support quietly ended on July 20; the shutdown has only now come to public attention.
OpenAI’s detector was notorious for its poor performance in flagging AI-generated content. The company itself acknowledged at launch that the tool correctly identified only 26% of AI-written text, and accuracy barely improved during its short lifespan.
On the announcement page, the AI classifier’s blog post has been edited to include the following statement:
“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”
OpenAI Shuts Down Unreliable AI Detector
AI detectors have a track record of producing unreliable results. Tools like GPTZero infer authorship from statistical patterns in the writing, such as perplexity and burstiness, rather than from any verifiable signal embedded in the text itself.
Ongoing research within the AI industry is exploring watermarking, or embedding additional provenance data within generated text. This would function similarly to EXIF metadata in an image, which often records details like the camera model and location.
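To make the watermarking idea concrete, here is a minimal, hypothetical sketch of one published approach (the "green list" scheme from academic research, not OpenAI's actual technique). A generator deterministically splits the vocabulary into "green" and "red" tokens based on the preceding token and biases its sampling toward green ones; a detector then checks whether an unusually high fraction of tokens are green. All function names here are illustrative, not from any real library.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically assign each (context, token) pair to a "green"
    # or "red" list by hashing. A watermarking generator would bias its
    # sampling toward green tokens; by chance, ~50% of tokens are green.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    # A detector counts how many tokens fall in the green list for their
    # context. Watermarked text shows a fraction well above the ~0.5
    # expected for unwatermarked text.
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Unlike a classifier that guesses from writing style, this check is against a deterministic marker: the same text always yields the same score, and the detector needs no model of "how humans write."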
AI detectors have already led to false accusations of cheating, with tools like Turnitin putting students' academic careers at risk, including those of prospective law school graduates.
OpenAI has not provided a timeline for a replacement classifier, though the company has noted a potential dip in traffic as students take a break for the summer.
The reliability of AI detectors clearly remains a major challenge, and OpenAI's decision to shut down its own detector underscores the need for more accurate methods of identifying AI-generated content. As the industry progresses, mechanisms that give users transparency about whether content is human-written or AI-generated will be essential, both to protect individuals from false accusations and to foster trust and accountability in the digital landscape.
For the latest news and updates on AI and technology, visit GPT News Room.