"Dr. Arne Babenhauserheide" <[email protected]> writes:

> Jean Louis <[email protected]> writes:
>
>> * David Masterson <[email protected]> [2026-03-09 23:41]:
>>> OpenAI has proven that LLMs have a fundamental problem -- they lie and
>>> their lying is getting more pronounced in the newer models.  The basic
>>> problem is they are trained to *not* say "I don't know" because saying
>>> that would break the foundation of their business plan.  Something to
>>> incorporate in your draft...
>>> 
>>> https://www.science.org/content/article/ai-hallucinates-because-it-s-trained-fake-answers-it-doesn-t-know
>>
>> Just that you are generalizing when saying "LLMs have a fundamental
>> problem" -- did you make a study to prove that fundamental problem?
>
> Did you read the article?
>
> »“Fixing hallucinations would kill the product,” says Wei Xing, an AI
>  researcher at the University of Sheffield.«
>
> Not everyone in the article agrees that it is a fundamentally unfixable
> problem, but this article is about a study and asks experts in the field
> what that study means for AI.

Right. As you said, if you fix the problem by having the AI say "I don't
know," and that answer comes up a significant percentage of the time,
then the AI loses its value to the customer.

David Masterson
