OpenAI shuts down AI text-detection tool over 'low rate of accuracy'
San Francisco, July 26: Microsoft-backed OpenAI has discontinued its AI text-detection tool due to a "low rate of accuracy" in distinguishing whether written material was created by a human or by ChatGPT, its AI chatbot.
"We are working to incorporate feedback and are currently researching more effective provenance techniques," OpenAI said in a blogpost.
The company is working on an improved text classifier and has "made a commitment" to do the same for audio and visual content generated by its DALL-E image generator.
The text-detection tool was first released by OpenAI in January 2023, citing the need for systems that can flag AI-written text and counter false claims that such text was written by a human.
"We’ve trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers. While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human," the company had said at the time.
OpenAI's AI classifier tool was limited and inaccurate from the start. It required users to manually enter at least 1,000 characters of text, which the tool would then classify as AI-written or human-written.
It correctly classified only 26 per cent of AI-written content as "likely AI-written" and mislabelled human-written text as AI-generated 9 per cent of the time, according to the company.
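To illustrate how such a detector presented its results, the sketch below shows a minimal, hypothetical classifier wrapper in Python. It is not OpenAI's implementation or API: the `classify_text` function, its `ai_probability` input and the score thresholds are all assumptions made for illustration; only the 1,000-character minimum and the "likely AI-written" label come from the reporting above.

```python
# Hypothetical sketch (not OpenAI's actual tool or API): gate the input on a
# 1,000-character minimum and map a model's AI-probability score to a label.

def classify_text(text: str, ai_probability: float) -> str:
    """Map an assumed AI-probability score to a human-readable verdict.

    `ai_probability` stands in for the score a trained classifier would
    return; the thresholds below are illustrative, not OpenAI's.
    """
    if len(text) < 1000:
        raise ValueError("Input must be at least 1,000 characters long.")
    if ai_probability >= 0.9:
        return "likely AI-written"
    if ai_probability >= 0.5:
        return "possibly AI-written"
    return "unlikely AI-written"


# A 26 per cent true-positive rate means only about 1 in 4 AI-written samples
# would ever receive the "likely AI-written" verdict from the real tool.
sample = "..." * 400  # placeholder text longer than 1,000 characters
print(classify_text(sample, ai_probability=0.93))  # -> "likely AI-written"
```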
Meanwhile, OpenAI CEO Sam Altman has launched his audacious eyeball-scanning cryptocurrency startup 'Worldcoin', which aims to provide a reliable way to distinguish humans from AI online, enable global democratic processes and drastically increase economic opportunity.
Worldcoin consists of a privacy-preserving digital identity (World ID) and, where laws allow, a digital currency (WLD) received simply for being human.
Users can now download World App, the first protocol-compatible wallet, and reserve their share.