Stathis Papaioannou wrote:
Brent Meeker writes:
> OK, an AI needs at least motivation if it is to do anything, and we
> could call motivation a feeling or emotion. Also, some sort of
> hierarchy of motivations is needed if it is to decide that saving
> the world has higher priority than putting out the garbage. But what
> reason is there to think that an AI apparently frantically trying to
> save the world would have anything like the feelings a human would
> under similar circumstances? It might just calmly explain that
> saving the world is at the top of its list of priorities, and it is
> willing to do things which are normally forbidden it, such as
> killing humans and putting itself at risk of destruction, in order
> to attain this goal. How would you add emotions such as fear, grief,
> regret to this AI, given that the external behaviour is going to be
> the same with or without them because the hierarchy of motivation is
> the same?
You are assuming the AI doesn't have to exercise judgement about
secondary objectives - judgement that may well involve conflicts of
values that have to be resolved before acting. If the AI is saving the
world it might, for example, raise its CPU voltage and clock rate in
order to compute faster - electronic adrenaline. It might cut off
some peripheral functions, like running the printer. Afterwards it
might "feel regret" when it cannot recover some functions.
Although there would be more conjecture in attributing these feelings
to the AI than to a person acting in the same situation, I think the
principle is the same. We think the person's emotions are part of the
function - so why not the AI's too?
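
To make that concrete, here is a toy sketch in Python - every name and
number in it is invented for illustration, not a claim about any real
architecture - of a motivation hierarchy shedding low-priority functions
in an emergency and then tallying what it could not get back:

    from dataclasses import dataclass

    @dataclass
    class Subsystem:
        name: str
        priority: int        # higher = more important to keep running
        recoverable: bool    # can it be restarted after the emergency?
        running: bool = True

    @dataclass
    class Agent:
        clock_ghz: float
        subsystems: list

        def emergency(self, threshold: int):
            """Overclock and shut down everything below the threshold."""
            self.clock_ghz *= 1.5              # "electronic adrenaline"
            for s in self.subsystems:
                if s.priority < threshold:
                    s.running = False          # cut off the printer, etc.

        def recover(self):
            """Restart what can be restarted; report what was lost."""
            self.clock_ghz /= 1.5
            lost = []
            for s in self.subsystems:
                if not s.running:
                    if s.recoverable:
                        s.running = True
                    else:
                        lost.append(s.name)    # the occasion for "regret"
            return lost

    agent = Agent(clock_ghz=2.0, subsystems=[
        Subsystem("world-saving planner", priority=10, recoverable=True),
        Subsystem("printer spooler",      priority=1,  recoverable=True),
        Subsystem("long-term cache",      priority=2,  recoverable=False),
    ])
    agent.emergency(threshold=5)
    print("lost for good:", agent.recover())   # -> ['long-term cache']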
Do you not think it is possible to exercise judgement with just a
hierarchy of motivation?
Yes and no. It is possible, given arbitrarily long time and other resources, to
work out the consequences, or at least a best estimate of the consequences, of
actions. But in real situations the resources are limited (e.g. my brain
power) and so decisions have to be made under uncertainty, and tradeoffs of
uncertain risks are necessary: should I keep researching, or does that risk
being too late with my decision? So it is at this level that we encounter
conflicting values. If we could work everything out to our own satisfaction
maybe we could be satisfied with whatever decision we reached - but life is
short and calculation is long.
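
That tradeoff can be written down as a little calculation. A toy sketch,
with numbers I have simply made up: each round of deliberation sharpens
the estimate of the right action but raises the risk that the decision
arrives too late, and thinking stops as soon as another round no longer
pays in expectation:

    def value(accuracy, risk):
        """Expected value of acting now: right answer, delivered in time."""
        return accuracy * (1.0 - risk)

    def deliberate(accuracy=0.5, risk=0.0, learn=0.1, delay=0.05):
        rounds = 0
        while True:
            next_acc = accuracy + learn * (1.0 - accuracy)  # diminishing returns
            next_risk = min(1.0, risk + delay)              # the deadline creeps closer
            if value(next_acc, next_risk) <= value(accuracy, risk):
                return rounds, accuracy, risk               # thinking stops paying
            accuracy, risk, rounds = next_acc, next_risk, rounds + 1

    print(deliberate())   # with these toy numbers: stop after 3 rounds,
                          # at roughly 0.64 accuracy and 0.15 lateness risk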
Alternatively, do you think a hierarchy of
motivation will automatically result in emotions?
I think motivations are emotions.
For example, would
something that the AI is strongly motivated to avoid necessarily cause
it a negative emotion,
Generally, contemplating something you are motivated to avoid - like your own
death - is accompanied by negative feelings. The exception is when you
contemplate your narrow escape. That is a real high!
and if so what would determine if that negative
emotion is pain, disgust, loathing or something completely different
that no biological organism has ever experienced?
I'd assess them according to their function, by analogy with the experiences of
biological systems. Pain = the experience of injury or loss of function. Disgust =
the assessment of extremely negative value to some event, but without fear.
Loathing = the external signaling of disgust. Would this assessment be
accurate? I dunno, and I suspect that's a meaningless question.
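
In the same spirit, those functional definitions could be written down as
a toy classifier - entirely my own construction, just to show the mapping
is mechanical:

    def emotion_analogue(lost_function, valuation, fear, signaled):
        """Map a functional description of the AI's state to the nearest
        biological emotion label, per the definitions above."""
        if lost_function:
            return "pain"       # experience of injury, loss of function
        if valuation < -0.9 and not fear:
            return "loathing" if signaled else "disgust"
        return "unnamed"        # perhaps something no biological organism has felt

    print(emotion_analogue(lost_function=True, valuation=0.0, fear=False, signaled=False))   # pain
    print(emotion_analogue(lost_function=False, valuation=-0.95, fear=False, signaled=True)) # loathing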
"As men's prayers are a disease of the will, so are their creeds a disease of the