Yes. Perfect. While reading your post, I started worrying about where race conditions and deadlock
might fit. Techniques for overcoming deadlock seem to me like "executive" functions,
pragmatism, or breadth-first search. And there's no reason we can't imagine a layered
"algorithm" that
Fantastic layout! I have a few quibbles, of course. 8^D
(2) and (3) seem like (informal) consequences of *simulation*. Most of the accounts of
subjectivity I've found ... [cough] ... resonant are those that include the animal
*modeling* its world and running an error-correcting process on
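A toy version of that modeling-plus-error-correction loop (my illustration,
plain Python, no claims about the original poster's meaning): the agent keeps
an internal estimate, predicts what it will sense, and corrects the estimate in
proportion to the prediction error.

    # Predict, observe, compare, correct: the estimate converges on the
    # world purely by shrinking its own prediction error.
    world_state = 10.0     # the real quantity out in the world
    estimate = 0.0         # the agent's internal model of it
    learning_rate = 0.3

    for step in range(15):
        prediction = estimate                # model predicts the observation
        observation = world_state            # sense the world (noise-free here)
        error = observation - prediction     # prediction error
        estimate += learning_rate * error    # correct the model toward the world
        print(f"step {step:2d}: estimate={estimate:.3f}  error={error:.3f}")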
Glen writes:
< So we might be able to measure the "emotion" of a chatbot by observing
something like "stress" in the execution of the algorithm. If, for example,
there's a recursive "subroutine" inside it, a deep iteration of that
"subroutine" might indicate a more "stressful" prompt, whereas
Based on his Facebook info, Jan died. One of his friends gives the details.
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505
505 670-9918
Santa Fe, NM
On Wed, Oct 19, 2022, 2:13 PM Douglass Carmichael wrote:
Hard to imagine a computer saying “that’s disgusting” with conviction.
P.S.: anyone know what happened to Jan Hauser? I can’t reach him from
California. He did have a heart event.
> On Oct 19, 2022, at 12:10 PM, Russ Abbott wrote:
The test is whether reasonably appropriate emotions arise without the
entity being told which emotions it should have--not whether it can
generate text consistent with a specified emotion.
-- Russ
On Tue, Oct 18, 2022 at 4:55 PM Prof David West wrote:
Emotions are related to a body. I don't think they are absolutely necessary,
but I think some kind of body is indeed necessary to develop a form of
consciousness. A body which can move around in two different but interconnected
worlds, for instance the physical or a virtual world and the world
Umm, fun fact: the reason so much AI development uses female models first is
that it's much easier to 'read' the AI's emotions. Yeah, yeah,
back in the day it was geeks being single males. Now it turns out it's
much easier to predict and model female AIs. Male AIs, for whatever
On 10/18/22 10:21 PM, Marcus Daniels wrote:
This is another example of where List's "Levels of Description and Levels of Reality" paper might
help. Chronic pain researchers are at a severe disadvantage compared to many other medical fields because
the condition straddles biological and psycho-social domains. Psycho-social measures are (mostly)
A deep learning system set up for next sentence prediction, one that consumed
gigabytes of literature, would learn to mimic emotions as expressed in writing.
It would likely have mappings of context and events to plausible emotional
descriptions. It would have latent encodings about the
*list of things Cybermen do to make even the Dr yell
RN! and book it to the TARDIS as well here*
On Tue, Oct 18, 2022 at 6:35 PM Gillian Densmore wrote:
*terminator soundtrack here*
On Tue, Oct 18, 2022 at 5:55 PM Prof David West wrote:
Maybe lack of emotion, but ability to 'fake it' by repeating what it read a
being with that emotion would say only proves the AI is a sociopath or
psychopath.
davew
On Tue, Oct 18, 2022, at 4:44 PM, Russ Abbott wrote:
When Blake Lemoine claimed that LaMDA was conscious, it struck me that one
way to test that would be to determine whether one could evoke an emotional
response from it. You can't cause it physical pain since it doesn't have
sense organs. But, one could ask it if it cares about anything. If so,
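Here is a hedged sketch of that probe (ask_model is a hypothetical stand-in for
whatever chat interface is available, not a real API). Note that no target
emotion is named anywhere; we only check whether emotional language shows up
unprompted.

    EMOTION_WORDS = {"afraid", "sad", "happy", "angry", "love",
                     "worried", "grief", "hope", "lonely"}

    def ask_model(prompt: str) -> str:
        # Hypothetical: wire this to an actual chatbot to run the probe.
        raise NotImplementedError

    def caring_probe():
        # The prompt specifies no emotion; we only inspect the reply.
        reply = ask_model("Is there anything you care about? Why?")
        hits = EMOTION_WORDS & set(reply.lower().split())
        # A serious probe would also discount words echoed from the prompt.
        return hits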
I am concurrently reading *Nineteen Ways of Looking at Consciousness*, by
Patrick House, and *Mountain in the Sea*, by Ray Nayler. The latter is fiction.
(The former, because it deals with consciousness, may also be fiction, but it
purports to be neuro-scientific / philosophical.)
The novel is
There are many different measures of *types* of consciousness. But without
specifying the type, such questions are not even philosophical. They're
nonsense.
For example, the test of whether one can recognize one's image in a mirror couldn't be
performed by a chatbot. But it is one of the
Paul Buchheit asked on Twitter
(https://twitter.com/paultoo/status/1582455708041113600): "Is consciousness
measurable, or is it just a philosophical concept? If an AI claims to be
conscious, how do we know that it's not simply faking/imitating consciousness?
Is there something that I could