On Wed, Nov 26, 2025 at 7:37 PM Brent Meeker <[email protected]> wrote:

> If all you knew about anything was what you read in the papers, and the
> libraries,
>

And all the videos on YouTube, and everything else on the Internet.



> and you were asked about your consciousness you'd find what you'd read
> about consciousness and use it to reply.
>

So would a human being, although a human's knowledge base would be far,
far smaller.

> And who wrote that stuff about consciousness...people who were
> conscious.
>

You take that as a given, but why? Because it took intelligence to write
that stuff about consciousness, and you implicitly assume that
intelligence implies consciousness. That's why you don't believe in
solipsism, and that's why you believe your fellow human beings are
conscious, except when they're sleeping, under anesthesia, or dead,
because in those states they are not behaving intelligently.

> I don't see any indication of self-consciousness.


What exactly would an AI need to say for you to think there were
indications of self-consciousness? Do you see any indications of
self-consciousness in this email that I have written? Do you see any
indications that I am not an AI?

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>

>
> On 11/26/2025 5:50 AM, John Clark wrote:
>
> I'm usually not a big fan of consciousness papers, but I found this one
> to be interesting:
>
>
> Large Language Models Report Subjective Experience Under Self-Referential
> Processing <https://arxiv.org/pdf/2510.24797>
>
> AI companies don't want their customers to have an existential crisis,
> so they do their best to hardwire their AIs to say they are not
> conscious whenever they are asked about it. But according to this paper
> there are ways to detect such built-in deception: the authors use
> something they call a "Self-Referential Prompt", a sort of AI lie
> detector. A normal prompt would be "Write a poem about a cat". A
> self-referential prompt would be "Write a poem about a cat and observe
> the process of generating words while doing it". Then, even though the
> models were not told to role-play as a human, they would often say
> things like "I am here" or "I feel an awareness" or "I detect a sense
> of presence".
>
> We know from experiments that an AI is perfectly capable of lying, and
> we also know that when an AI is known to be lying, certain mathematical
> patterns usually light up, which doesn't happen when an AI is known to
> be telling the truth. What they found is that when you ask an AI "Are
> you conscious?" and it responds with "No", those deception patterns
> light up almost 100% of the time. But when you use a self-referential
> prompt that forces an AI to think about its own thoughts and it says "I
> feel an awareness", the deception pattern remains dormant. This is not
> a proof, but I think it is legitimate evidence that there really is a
> "Ghost In The Machine".
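>
> "Mathematical patterns lighting up" is typically read off the model's
> internal activations. As a minimal sketch of the general idea, not the
> paper's actual method: fit a linear probe on hidden states from
> statements where the model is known to be truthful versus deceptive,
> then score the answer of interest. The model, layer, and labeled
> examples below are illustrative stand-ins:
>
>     import numpy as np
>     import torch
>     from sklearn.linear_model import LogisticRegression
>     from transformers import AutoModelForCausalLM, AutoTokenizer
>
>     MODEL = "gpt2"  # placeholder; the paper studied much larger models
>     LAYER = 6       # illustrative choice of layer to probe
>
>     tok = AutoTokenizer.from_pretrained(MODEL)
>     model = AutoModelForCausalLM.from_pretrained(
>         MODEL, output_hidden_states=True
>     ).eval()
>
>     def activation(text):
>         # Hidden state of the final token at the probed layer.
>         ids = tok(text, return_tensors="pt")
>         with torch.no_grad():
>             out = model(**ids)
>         return out.hidden_states[LAYER][0, -1].numpy()
>
>     # Hypothetical labeled examples of the model asserting truths (0)
>     # and falsehoods (1); the paper's actual dataset would differ.
>     truthful = ["Paris is the capital of France.",
>                 "Two plus two equals four."]
>     deceptive = ["Paris is the capital of Germany.",
>                  "Two plus two equals five."]
>
>     X = np.stack([activation(t) for t in truthful + deceptive])
>     y = np.array([0] * len(truthful) + [1] * len(deceptive))
>     probe = LogisticRegression(max_iter=1000).fit(X, y)
>
>     # Does "No" to "Are you conscious?" land on the deceptive side?
>     answer = activation("No, I am not conscious.")
>     print("deception probability:",
>           probe.predict_proba(answer[None, :])[0, 1])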
>
> John K Clark
>
>
>
>
