OpenAI explains reasons for chatbot ‘hallucinations’

Source: Russia Today

Language models have been conditioned to hazard wild guesses instead of admitting ignorance, a study has found

The company behind ChatGPT has addressed the persistent problem of artificial intelligence models generating plausible but false statements, which it calls “hallucinations”.

In a statement on Friday, OpenAI explained that models are typically encouraged to hazard a guess, however improbable, rather than acknowledge that they cannot answer a question.

The issue is attributable to the core principles underlying “standard training and evaluation procedures,” the company added.

OpenAI has revealed that the instances where language models “confidently generate an answer that isn’t true” have continued to plague even newer, more advanced iterations, including its latest flagship GPT‑5 system.

According to the findings of a recent study, the problem is rooted in how language model performance is currently evaluated: a model that guesses is ranked higher than a careful one that admits uncertainty. Under the standard protocols, AI systems learn that declining to answer guarantees zero points on a test, while an unsubstantiated guess may turn out to be correct.
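The scoring dynamic the study describes can be sketched with expected values. This is a minimal illustration, not the study's methodology; the 25% guess-success rate is an assumed figure chosen for the example.

```python
# Under accuracy-style scoring, an answer earns 1 point if correct and 0 otherwise,
# and "I don't know" also earns 0 -- so abstaining is never better than guessing.
P_GUESS_CORRECT = 0.25  # hypothetical chance a wild guess happens to be right

expected_guess = P_GUESS_CORRECT * 1 + (1 - P_GUESS_CORRECT) * 0  # 0.25 points
expected_abstain = 0.0  # admitting ignorance always scores zero

print(expected_guess > expected_abstain)  # True: the scoreboard rewards guessing
```

Because the guesser's expected score is strictly positive while the abstainer's is zero, training against such a benchmark pushes models toward confident guessing.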


“Fixing scoreboards can broaden adoption of hallucination-reduction techniques,” the statement concluded, acknowledging, however, that “accuracy will never reach 100% because, regardless of model size, search and reasoning capabilities, some real-world questions are inherently unanswerable.”
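One way such a scoreboard could be “fixed” is to penalize confident wrong answers so that abstaining is no longer dominated. The penalty value below is an illustrative choice, not one proposed in the statement.

```python
def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of always guessing: +1 when right, wrong_penalty when wrong."""
    return p_correct * 1.0 + (1.0 - p_correct) * wrong_penalty

# With no penalty (standard accuracy), guessing beats the abstainer's 0 points.
print(expected_score(0.25, 0.0))   # 0.25
# With a -1 penalty for wrong answers, guessing scores below abstaining.
print(expected_score(0.25, -1.0))  # -0.5
```

Under the penalized rule, a model is only rewarded for answering when its chance of being right outweighs the cost of being wrong, which aligns the incentive with admitting uncertainty.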

