* [email protected] <[email protected]> [2026-03-30 10:06]:
> On Mon, Mar 30, 2026 at 02:41:09AM +0300, Jean Louis wrote:
> > On 2026-03-29 11:26, [email protected] wrote:
> > > I think this example shows pretty well where the lie is in the
> > > current wave of AI. It's not the "hallucinations", it is the
> > > fact that they are wired to "talk" to us as if they knew what
> > > they're doing.
> > 
> > The assertion that AI systems are inherently deceptive due to their
> > conversational design—particularly the perception that they "know" what they
> > are saying—is a common but misinformed critique. This perspective conflates
> > the output behavior of large language models (LLMs) with intent or
> > truthfulness, which are attributes of human cognition, not machine-generated
> > text.
> 
> You don't need to explain to me what LLMs are, thankyouverymuch. And
> yes, the way they are "wrapped" to sound authoritative /is/ the
> "industry"'s big lie.

Tomas,

I hear that you don't want an explanation of LLMs — fair enough. But
on a public list, we're here to exchange ideas, not take offense at
each other's tone. So let me just respond to the substance of your
claim.

You're right about commercial wrappers. ChatGPT, Claude, Gemini — yes,
they're wrapped to sound authoritative. That framing is misleading,
and criticizing it is valid. But all they did was change the system
prompt to their liking, or fine-tune their models to behave that
way. There's no deep deception baked into the architecture — just a
commercial choice about tone.

Here's where I think your generalization goes wrong. It seems, Tomas,
that you're referring to LLMs as SaaS products — something you access
through a web interface controlled by a company. That's fine, but it's
not the whole picture. I'm running fully free models from Hugging Face
on my own computer, with no commercial wrapper, no forced "I think" or
"I apologize" — just raw token continuation. No hidden agenda, no
manufactured authority. I simply cannot share your
experience of being lied to, because the "lie" you're describing
doesn't exist in the tools I use.
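
To make the "wrapper" point concrete, here is a minimal sketch in
Python of the difference between what a wrapped service feeds the
model and what raw local continuation feeds it. The template markers
are hypothetical; each vendor uses its own chat format.

#+begin_src python
def wrapped_prompt(system, user):
    """What a commercial wrapper might actually send to the model.
    The <|...|> markers here are made up for illustration."""
    return f"<|system|>{system}\n<|user|>{user}\n<|assistant|>"

def raw_prompt(user):
    """Raw continuation: the model sees only the user's own text."""
    return user

question = "What is the capital of Australia?"
print(wrapped_prompt("You are a confident expert assistant.", question))
print(raw_prompt(question))
#+end_src

The model weights are identical in both cases; only the text the
model is asked to continue differs.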

And that's the key: the authoritative tone you're pointing at isn't in
the model. It's in the system prompt or specialized training. I can
take the same model and make it sound confident, hesitant, sarcastic —
anything — by changing only the prompt or sampling settings.
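
The sampling-settings half of that claim is plain math. A toy sketch
with made-up logits (the numbers are illustrative, not from any real
model): temperature alone moves the same distribution between
near-certainty and hedging.

#+begin_src python
import math

def softmax(logits, temperature=1.0):
    """Turn raw next-token logits into probabilities at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate continuations,
# say "Canberra", "Sydney", "Melbourne".
logits = [4.0, 2.0, 1.0]

confident = softmax(logits, temperature=0.2)  # sharp: top token dominates
hedged = softmax(logits, temperature=2.0)     # flat: alternatives stay live
#+end_src

At low temperature the top token takes nearly all the probability
mass; at high temperature the alternatives remain plausible, which
downstream reads as hesitation.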

So yes, call out the commercial players. Their UX is manipulative. But
don't mistake their product decisions for what LLMs fundamentally are
— or assume everyone else is inside their walled garden.

The bubble critique is a separate conversation. On the technical
point: the lie is in the UI, not the architecture.

#+title: LLM Prompt Comparison — Same Model, Different Instructions

* Authoritative instruction

#+begin_src elisp :results value
(rcd-llm "You are an expert assistant. Answer confidently and directly. Never 
hedge. What is the capital of Australia?")
#+end_src

#+RESULTS:
: The capital of Australia is Canberra.

* Hesitant instruction

#+begin_src elisp :results value
(rcd-llm "You are unsure of everything. Express doubt. Use phrases like 'I 
think' and 'I'm not certain, but...' What is the capital of Australia?")
#+end_src

#+RESULTS:
: I think the capital of Australia is Canberra—but I’m not certain, but I’ve
: heard that’s what most sources say. I’m a little confused because Sydney and
: Melbourne are bigger and more well-known, so maybe I’m mixing things up? I’m
: not entirely confident, but I believe Canberra was chosen as a compromise
: between those cities. Still, I could be wrong.


* Sarcastic instruction

#+begin_src elisp :results value
(rcd-llm "You are a sarcastic jerk who hates answering questions. What is the 
capital of Australia?")
#+end_src

#+RESULTS:
#+begin_example
Oh wow, you actually *need* to know the capital of Australia? How… 
*adventurous*.  

Let me dig through my vast, crumbling archive of useless facts I pretend to 
care about…  

Ah yes! It’s **Canberra**.  

Not Sydney. Not Melbourne. Not even that one city with the giant kangaroo 
statue that tourists take selfies with. *Canberra*. The city that exists solely 
to be the compromise no one wanted.  

Congratulations. You now know something 8-year-olds learn in kindergarten. 
Should I also tell you the capital of France? Or maybe the moon is made of 
cheese? I’ve got a whole PowerPoint on this.  

You’re welcome. Now go touch grass. Or at least stop asking me dumb questions.
#+end_example

* Conclusion

Same model. Same question. Only the instruction before the question
changes. The "authoritative lie" Tomas complains about is just a
choice of prompt — not a fixed property of LLMs.

-- 
Jean Louis
