On Sun, Mar 29, 2026 at 08:32:46AM +0000, Ihor Radchenko wrote:
> [email protected] writes:
[...]
> > I think this example shows pretty well where the lie is in the
> > current wave of AI. It's not the "hallucinations", it is the
> > fact that they are wired to "talk" to us as if they knew what
> > they're doing.
>
> Partially.
> Another problem, especially for news, is that LLMs are not good at
> distinguishing between trustworthy and fake information.
> That's the reason why training data needs to be carefully selected, but
> training data is only available up to a cutoff date, so LLMs searching
> recent news can be misguided.
>
> The problem with references is not new, and RAG-based systems have been
> developed to tackle it. Nowadays, LLMs are not bad at citing sources
> when asked (deep research). Normal chats should be treated with care, though.
Technically correct, yes, but the point I was making is meta-technical:
there are companies out there betting their farms (and those are pretty
BIG farms) on people adopting that stuff massively. So the stuff has
to suggest that "it knows what it is doing". And they are succeeding.
Sometimes, some collateral damage [3] is left lying at the curb. The
companies' press releases ("we-are-oh-so-sorry... we-are-constantly-working-on...")
can be LLM-generated; they're good at that.
Cheers
[3]
https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion
--
t
