OpenAI’s Whisper Tool: Researchers Claim It Generates False Info

According to a recent report from ABC News (as highlighted by Engadget), OpenAI's audio transcription tool, Whisper, has a tendency to fabricate text that is not present in the original audio recordings.
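For context, the open-source release of Whisper is typically run from Python via OpenAI's "openai-whisper" package. The minimal sketch below shows a typical transcription call; the audio file name and the "base" model size are illustrative placeholders, not details from the report.

    # Minimal sketch: transcribing an audio file with the open-source Whisper package
    # (pip install openai-whisper). The file name and model size are placeholders.
    import whisper

    model = whisper.load_model("base")             # load a pretrained checkpoint
    result = model.transcribe("consultation.wav")  # run speech-to-text on the audio

    # The result is a dict with the full transcript and per-segment timings;
    # any hallucinated text appears here indistinguishably from accurate output.
    print(result["text"])
    for segment in result["segments"]:
        print(f"[{segment['start']:.1f}s-{segment['end']:.1f}s] {segment['text']}")

Because hallucinated passages surface as ordinary text in this output, downstream users such as clinics have no built-in signal that part of a transcript was invented.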

Concerns in Various Industries

This raises serious concerns because Whisper is used across multiple sectors, including healthcare facilities that rely on the tool to document patient consultations, despite OpenAI's explicit warning that it should not be used in "high-risk domains."

Issues Found in Transcriptions

A machine learning engineer identified hallucinations in roughly half of the more than 100 hours of transcriptions he analyzed, and another developer reported finding them in nearly all of the 26,000 transcripts he reviewed. Researchers warn that this could translate into inaccurate transcriptions across millions of recordings worldwide. The risk is amplified because the tool is also built into Oracle's and Microsoft's cloud platforms, which serve thousands of clients around the world. An OpenAI representative told ABC News that the company is investigating these claims and will incorporate the feedback into future model updates.

Harmful Hallucinations Found in Research

Professors Allison Koenecke and Mona Sloane analyzed thousands of short audio segments from TalkBank and found that about 40% of the hallucinations they identified were harmful. For instance, in one recording a speaker said, "He, the boy, was going to, I'm not sure exactly, take the umbrella." Whisper, however, transcribed this as, "He took a big piece of the cross, a teeny, small piece...I'm sure he didn't have a terror knife so he killed a number of people."

Sources: Engadget, ABC News
