David Masterson <[email protected]> writes:

> Hmm.  You've probably also seen this, but...
>
> OpenAI has proven that LLMs have a fundamental problem -- they lie and
> their lying is getting more pronounced in the newer models.  The basic
> problem is they are trained to *not* say "I don't know" because saying
> that would break the foundation of their business plan.  Something to
> incorporate in your draft...
>
> https://www.science.org/content/article/ai-hallucinates-because-it-s-trained-fake-answers-it-doesn-t-know

That's not a fundamental problem, just a benchmarking problem, and
training methods are evolving. More recent benchmarks specifically
test RAG performance, or grounding in facts more generally.
The field is moving quickly: while 6-7 months ago I would never have
trusted any citation claims made by LLMs, getting actually correct
citations (including the specific paragraph used) is now standard
practice for the newest LLMs.

-- 
Ihor Radchenko // yantar92,
Org mode maintainer,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>