[email protected] writes: > On Sat, Mar 28, 2026 at 01:13:23PM -0700, David Masterson wrote: > >> > No, generative LLMs are not "telling the truth", they are just making >> > things up which "sound plausible". For an especially jarring, recent >> > example, see [2]. >> >> Wow! > > I think this example shows pretty well where the lie is in the > current wave of AI. It's not the "hallucinations", it is the > fact that they are wired to "talk" to us as if they knew what > they're doing.
Partially. Another problem, especially for news, is that LLMs are not
good at distinguishing between trustworthy and fake information. That
is why training data needs to be carefully selected; but training data
is only available up to a cutoff date, so LLMs searching recent news
can be misled.

The problem with references is not new, and RAG-based systems have been
developed to tackle it. Nowadays, LLMs are not bad at citing sources
when asked to (as in "deep research" modes). Normal chats should still
be treated with care, though.

-- 
Ihor Radchenko // yantar92,
Org mode maintainer,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
