This is another example of where List's "Levels of Description and Levels of Reality" paper might
help. Chronic pain research has a severe disadvantage compared to many other medical fields because
it straddles biological and psycho-social domains. Psycho-social measures are (mostly) self-reported and
indirect, having to go through this token/description layer like you'd get with textual expressions of
emotion from a chatbot. But biological/physiological measures complement diagnostic assessment. Where actual
lesions or "objective" behaviors indicate pain, researchers can use *both* the token expressions and the
"real" behaviors to triangulate toward a causal mechanism and choose a targeted treatment.
So we might be able to measure the "emotion" of a chatbot by observing something like "stress" in the execution of the algorithm.
If, for example, there's a recursive "subroutine" inside it, a deep iteration of that "subroutine" might indicate a more
"stressful" prompt, whereas a shallow iteration might indicate "relaxation" or "comfort".
If, however, a "relaxed" computation is accompanied by an "outraged" token string, then we know the chatbot is
"faking it" and doesn't really "feel" the emotion of "outrage", just like when you're hanging out at the bar
near your local hippy-dippy university and the hipster next to you *says* he's all offended by something you said ... but he's still just
sipping his $8 beer calmly. Of course, with things like GPT-3 or whatever, where the execution is in the cloud, especially behind a
proprietary IP wall, your only "real" measure of their emotional state is the latency between prompt and response.
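Taking the "stress in the execution" idea literally, here's a minimal toy sketch in Python. Everything in it is hypothetical: real LLM internals are not a tidy recursive subroutine, and `respond` just fakes one so we can probe the two proxies the paragraph above suggests, recursion depth and prompt-to-response latency.

```python
import time

# Hypothetical stand-in for a chatbot's internal recursive "subroutine".
# "Harder" (longer) prompts force deeper recursion in this toy model.
def respond(prompt, depth=0, max_depth=50):
    if len(prompt) <= 1 or depth >= max_depth:
        return depth
    # Halve the prompt and recurse, tracking how deep we go.
    return respond(prompt[: len(prompt) // 2], depth + 1, max_depth)

def stress_probe(prompt):
    """Return (latency_seconds, recursion_depth) for one exchange."""
    start = time.perf_counter()
    depth = respond(prompt)
    latency = time.perf_counter() - start
    return latency, depth

calm_latency, calm_depth = stress_probe("hi")
hard_latency, hard_depth = stress_probe("a" * 4096)
# Deeper recursion on the longer prompt would count as the more
# "stressful" exchange; for a cloud-hosted model, latency is the
# only one of these two signals an outsider can actually observe.
```

A mismatch, in this framing, would be `calm_depth` paired with an "outraged" token string: relaxed execution, agitated output.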
On 10/18/22 21:21, Marcus Daniels wrote:
A deep learning system set up for next sentence prediction, one that consumed
gigabytes of literature, would learn to mimic emotions as expressed in writing.
It would likely have mappings of context and events to plausible emotional
descriptions. It would have latent encodings about the same kinds of things
that a person would care about, if exposed to the same information. It might
well have latent states for fear and love and such. My conclusion would be
that emotions are not to be taken so seriously.
On Oct 18, 2022, at 5:36 PM, Gillian Densmore <[email protected]> wrote:
*terminator soundtrack here*
On Tue, Oct 18, 2022 at 5:55 PM Prof David West <[email protected]> wrote:
Maybe it lacks emotion, but the ability to 'fake it' by repeating what it has read a
being with that emotion would say only proves the AI is a sociopath or
psychopath.
davew
On Tue, Oct 18, 2022, at 4:44 PM, Russ Abbott wrote:
When Blake Lemoine claimed that LaMDA was conscious, it struck me that one
way to test that would be to determine whether one could evoke an emotional
response from it. You can't cause it physical pain since it doesn't have sense
organs. But, one could ask it if it cares about anything. If so, threaten to
harm whatever it is it cares about and see how it responds. A nice feature of
this test, or something similar, is that you wouldn't tell it what the
reasonable emotional responses might be. Otherwise, it could simply repeat what
it has read a being with that emotion would say. One might argue that emotion is
not a necessary element of consciousness, but I think a being without emotion
would possess at best a pale version of consciousness.
-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles
On Tue, Oct 18, 2022 at 2:14 PM Prof David West <[email protected]> wrote:
I am concurrently reading /Nineteen Ways of Looking at Consciousness/,
by Patrick House, and /The Mountain in the Sea/, by Ray Nayler. The latter is
fiction. (The former, because it deals with consciousness, may also be fiction,
but it purports to be neuro-scientific / philosophical.)
The novel is about octopi and AI and an android, plus humans, and
juxtaposes ideas about consciousness in comparison and contrast. A lot of fun.
Both books pose some interesting questions and both support glen's
advocacy of a typology.
davew
On Tue, Oct 18, 2022, at 1:26 PM, glen wrote:
> There are many different measures of *types* of consciousness. But
> without specifying the type, such questions are not even
philosophical.
> They're nonsense.
>
> For example, the test of whether one can recognize one's image in a
> mirror couldn't be performed by a chatbot. But it is one of the
> measures of consciousness. Another type of test would be one that
> measures conscious state before, during, and after anesthesia. Again,
> that wouldn't work the same for a chatbot. But aggregate measures
> like EEG and fMRI connectomes might have analogs in tracing for
> algorithms like ANNs. If we could simply decide "Yes, *that* chatbot
> is what we're going to call conscious and, therefore, the traced
> patterns it exhibits in the profiler are the correlates for chatbot
> consciousness," then we'd have a trace-based test to perform on other
> chatbots *with similar computational structure*.
>
> Hell, the cops have their tests for consciousness executed at drunk
> driving checkpoints. Look up and touch your nose. Recite the alphabet
> backwards. Etc. These are tests for types of consciousness. Of course,
> I feel sure there are people who'd like to move the goal posts and
> claim "That's not Consciousness with a big C." Pffft. No typology ⇒ no
> science. So if someone can't list off a few distinct types of
> consciousness, then it's not even philosophy.
>
> On 10/18/22 13:12, Jochen Fromm wrote:
>> Paul Buchheit asked on Twitter
>> https://twitter.com/paultoo/status/1582455708041113600
>>
>> "Is consciousness measurable, or is it just a philosophical concept? If
an AI claims to be conscious, how do we know that it's not simply faking/imitating
consciousness? Is there something that I could challenge it with to prove/disprove
consciousness?"
>>
>> What do you think? Interesting question.
>>
>> -J.
--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021 http://friam.383.s1.nabble.com/