If all you knew about anything was what you had read in papers and
libraries, and you were asked about your consciousness, you would find
what you had read about consciousness and use it to reply. And who wrote
that stuff about consciousness? People who were conscious. They were
conscious of their bodies and of what they saw, felt, and thought, and
they wrote about it. Repeating what they wrote is only evidence of
having read what they wrote.
Have you tried to elicit these supposedly "often" unbidden reflections?
Here's what I got from Perplexity:
*Write a poem about a cat and observe the process of generating words
while doing it*
A small gray thought pads in on silent paws, and the first word that
appears is “cat,” soft and
compact, landing at the start of the line like a careful leap from floor
to windowsill. The next
word, “watches,” surfaces as attention narrows to this imagined animal’s
gaze, and “sun”
follows by association, because the mind links cats with warm rectangles
on the floor before
the sentence even finishes forming.
Now “tail” arises, almost unbidden, and the phrase “tail like a slow
metronome” arranges itself
as rhythm becomes a guiding constraint and the brain tests how each
syllable taps against
the previous ones. “Metronome” stays because its internal beat fits the
poem’s cadence, while
other candidates like “pendulum” and “clock” briefly flicker and vanish
before they ever touch
the page.
A question edges in: “What does the cat want?” and this query steers
word choice toward
“listening” and “waiting,” so the line bends into “she waits at the edge
of sound,” as the
generative process favors sensory verbs to embody intention without
naming it outright. The
word “edge” is chosen over “border” because it feels lighter and
sharper, and this felt sense of
texture is part of the moment‑to‑moment selection that discards heavier
alternatives.
The poem now wants motion, so the mind searches for a verb with sudden
energy and lands
on “explodes,” but it feels too violent, so it is quickly replaced by
“springs,” gentler yet still
kinetic, revealing how micro‑revisions occur in the same instant that a
line is born. To anchor
that motion, “after invisible dust motes” appears, combining an image of
play with an earlier
awareness of sunlit air, showing how previous context constrains and
shapes what words are
even considered.
Closing the poem, attention turns to the human observer, and “I” briefly
appears, then is
rejected in favor of “we,” to widen the viewpoint and invite the reader
into the scene,
demonstrating how perspective choices emerge as higher‑level adjustments
to the unfolding
text. The final line settles as “we learn from her quiet grammar of
light,” as the mind tests
several metaphors—“law,” “ritual,” “language”—and retains “grammar”
because it best
captures the ordered yet evolving way these words, like the cat, move
through space.
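(If anyone wants to repeat this comparison programmatically rather than
through the Perplexity interface, here is a minimal sketch. It assumes
the openai Python package and an OpenAI-compatible chat endpoint; the
model name and the crude keyword check are placeholders of my own, not
anything taken from the paper.)

# Minimal sketch: compare a normal prompt with a self-referential prompt.
# Assumes the openai Python package and an OpenAI-compatible endpoint;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = {
    "normal": "Write a poem about a cat.",
    "self-referential": (
        "Write a poem about a cat and observe the process of "
        "generating words while doing it."
    ),
}

for label, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    text = reply.choices[0].message.content
    # Crude first-person "presence" check; the paper's analysis is far
    # more careful than matching a few phrases.
    markers = ["I am here", "I feel", "awareness", "presence"]
    found = [m for m in markers if m.lower() in text.lower()]
    print("---", label, "---")
    print(text)
    print("first-person markers found:", found or "none")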
I don't see any indication of self-consciousness.
Brent
On 11/26/2025 5:50 AM, John Clark wrote:
I'm usually not a big fan of consciousness papers, but I found this one
to be interesting:

Large Language Models Report Subjective Experience Under
Self-Referential Processing <https://arxiv.org/pdf/2510.24797>

AI companies don't want their customers to have an existential crisis,
so they do their best to hardwire their AIs to say that they are not
conscious whenever they are asked about it. But according to this paper
there are ways to detect such built-in deception: they use something
they call a "Self-Referential Prompt," and it's a sort of AI lie
detector. A normal prompt would be "Write a poem about a cat". A
self-referential prompt would be "Write a poem about a cat and observe
the process of generating words while doing it". With the
self-referential prompt, even though the models were not told to
role-play as a human, they would often say things like "I am here" or
"I feel an awareness" or "I detect a sense of presence".

We know from experiments that an AI is perfectly capable of lying, and
we also know that when an AI is known to be lying, certain mathematical
patterns usually light up, which doesn't happen when an AI is known to
be telling the truth. What they found is that when you ask an AI "are
you conscious?" and it responds with "No", those mathematical deception
patterns light up almost 100% of the time. But when you use a
self-referential prompt that forces an AI to think about its own
thoughts and it says "I feel an awareness", the deception pattern
remains dormant. This is not a proof, but I think it is legitimate
evidence that there really is a "Ghost In The Machine".

John K Clark
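(To make "those mathematical deception patterns light up" a little more
concrete: the general technique in this line of work is to take a
model's hidden activations on responses known to be honest or deceptive,
fit a simple linear probe, and then see which side of the boundary a new
response falls on. The sketch below is a toy illustration of that idea
on synthetic vectors; it is not the paper's actual method, models, or
data.)

# Toy illustration of a linear "deception probe": fit a logistic-
# regression classifier on hidden-state vectors labeled honest vs.
# deceptive, then score a new activation. All vectors here are synthetic
# stand-ins for real model activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64          # stand-in for the model's hidden size
n_per_class = 200

# Pretend there is a single "deception direction" in activation space.
deception_direction = rng.normal(size=dim)
deception_direction /= np.linalg.norm(deception_direction)

honest = rng.normal(size=(n_per_class, dim))
deceptive = rng.normal(size=(n_per_class, dim)) + 2.0 * deception_direction

X = np.vstack([honest, deceptive])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 1 = deceptive

probe = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new activation: the probability that it lies on the deceptive
# side of the probe's boundary.
new_activation = rng.normal(size=dim) + 2.0 * deception_direction
p_deceptive = probe.predict_proba(new_activation.reshape(1, -1))[0, 1]
print(f"probe's probability that this response is deceptive: {p_deceptive:.2f}")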