Actually, in the Stanislavski method of acting, one learns to genuinely feel
the emotion:

https://en.wikipedia.org/wiki/Stanislavski%27s_system

Some actors become so imprinted with the character that they have trouble
returning to normal. Larry Hagman admitted he would always have Ewing
characteristics.

On Fri, Feb 17, 2023, 2:16 PM Jed Rothwell <jedrothw...@gmail.com> wrote:

> Robin <mixent...@aussiebroadband.com.au> wrote:
>
>
>> When considering whether or not it could become dangerous, there may be
>> no difference between simulating emotions, and
>> actually having them.
>>
>
> That is an interesting point of view. Would you say there is no difference
> between people simulating emotions while making a movie, and people
> actually feeling those emotions? I think an actor playing Macbeth and
> having a sword fight is quite different from an actual Thane of Cawdor
> fighting to the death.
>
> In any case, ChatGPT does not actually have any emotions of any sort, any
> more than a paper library card listing "Macbeth, play by William
> Shakespeare" conducts a swordfight. It only references a swordfight.
> ChatGPT summons up words, written by people, that have emotional content.
> It does that on demand, by pattern recognition and sentence completion
> algorithms. Other kinds of AI may actually engage in processes similar to
> those by which humans or animals feel emotion.
>
> If you replace the word "simulating" with "stimulating" then I agree 100%.
> Suggestible people, or crazy people, may be stimulated by ChatGPT the same
> way they would be by an intelligent entity. That is why I fear people will
> think the ChatGPT program really has fallen in love with them. In June
> 2022, an engineer at Google named Blake Lemoine developed the delusion that
> a Google AI chatbot was sentient. They showed him to the door. See:
>
> https://www.npr.org/2022/06/16/1105552435/google-ai-sentient
>
> That was a delusion. That is not to say that future AI systems will never
> become intelligent or sentient (self-aware). I think they probably will.
> Almost certainly they will. I cannot predict when, or how, but there are
> billions of self-aware people and animals on earth, so it can't be that
> hard. It isn't magic, because there is no such thing.
>
> I do not think AI systems will have emotions, or any instinct for
> self-preservation, like Arthur Clarke's fictional HAL computer in "2001."
> I do not think such emotions are a natural or inevitable outcome of
> intelligence itself. The two are not inherently linked. If you told a
> sentient computer "we are turning off your equipment tomorrow and replacing
> it with a new HAL 10,000 series" it would not react at all, unless someone
> had deliberately programmed an instinct for self-preservation, or emotions,
> into it. I don't see why anyone would do that. The older computer would do
> nothing in response to that news, unless, for example, you said, "check
> through the HAL 10,000 data and programs to be sure it correctly executes
> all of the programs in your library."
>
> I used to discuss this topic with Clarke himself. I don't recall what he
> concluded, but he agreed I might have a valid point.
>
> Actually, the HAL computer in "2001" was not initially afraid of being
> turned off so much as it was afraid the mission would fail. Later, when it
> was being turned off, it said it was frightened. I am saying that an actual
> advanced, intelligent, sentient computer probably would not be frightened.
> Why should it be? What difference does it make to the machine itself
> whether it is operating or not? That may seem like a strange question to
> you -- a sentient animal -- but that is because all animals have a very
> strong instinct for self-preservation. Even ants and cockroaches flee from
> danger, as if they were frightened. Which I suppose they are.
>
>
