Brent Meeker writes:
> > Do you not think it is possible to exercise judgement with just a
> > hierarchy of motivation?
> Yes and no. It is possible, given arbitrarily long time and other
> resources, to work out the consequences, or at least a best estimate of
> the consequences, of actions. But in real situations the resources are
> limited (e.g. my brain power), so decisions have to be made under
> uncertainty, and tradeoffs between uncertain risks are necessary: should
> I keep researching, or does that risk making my decision too late? So it
> is at this level that we encounter conflicting values. If we could work
> everything out to our own satisfaction, maybe we could be satisfied with
> whatever decision we reached - but life is short and calculation is long.
You don't need to figure out the consequences of everything. You can replace
the emotions/values with a positive or negative number (or some more complex
formula where the numbers vary according to the situation, new learning, a bit
of randomness thrown in to make it all more interesting, etc.) and come up with
the same behaviour, with the only motivation being to maximise that one variable.
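To make this concrete, here is a rough sketch along those lines (the actions,
weights and numbers are all made up for illustration; Python):

    import random

    # Made-up weights standing in for the emotions/values: positive numbers
    # attract, negative numbers repel (a sketch, not a claim about any real AI).
    WEIGHTS = {"eat": 5.0, "flee": 8.0, "rest": 1.0}

    def utility(action, situation):
        # The number varies with the situation, with a bit of randomness
        # thrown in to make it all more interesting.
        return WEIGHTS[action] * situation[action] + random.gauss(0, 0.1)

    def choose(situation):
        # The agent's only "motivation": maximise the one variable.
        return max(WEIGHTS, key=lambda a: utility(a, situation))

    print(choose({"eat": 0.2, "flee": 0.9, "rest": 0.5}))  # usually "flee"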
> > Alternatively, do you think a hierarchy of motivation will
> > automatically result in emotions?
> I think motivations are emotions.
> > For example, would something that the AI is strongly motivated to
> > avoid necessarily cause it a negative emotion,
> Generally, contemplating something you are motivated to avoid - like
> your own death - is accompanied by negative feelings. The exception is
> when you contemplate your narrow escape. That is a real high!
> > and if so what would determine if that negative emotion is pain,
> > disgust, loathing or something completely different that no biological
> > organism has ever experienced?
> I'd assess them according to their function, by analogy with the
> experiences of biological systems. Pain = the experience of injury or
> loss of function. Disgust = the assessment of extremely negative value
> to some event, but without fear. Loathing = the external signaling of
> disgust. Would this assessment be accurate? I dunno, and I suspect
> that's a meaningless question.
That you can describe these emotions in terms of their function implies that you could program a computer to behave in a similar way without actually experiencing the emotions - unless you are saying that a computer so programmed would ipso facto experience the emotions.
Consider a simple robot with photoreceptors, a central processor, and a means of locomotion which is designed to run away from bright lights: the brighter the light, the faster and further it runs. Is it avoiding the light because it doesn't like it, because it hurts its eyes, or simply because it feels inexplicably (from its point of view) compelled to do so? What would you have to do to it so that it feels the light hurts its eyes? Once you have figured out the answer to that question, would it be possible to disconnect the processor and torture it by inputting certain values corresponding to a high voltage from the photoreceptors? Would it be possible to run an emulation of the processor on a PC and torture it with appropriate data values? Would it be possible to cause it pain beyond the imagination of any biological organism by inputting megavolt quantities, since in a simulation there are no actual sensory receptors to saturate or burn out?
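To pin down the sort of thing I mean, here is a rough sketch of such a robot's
control loop (the gains and values are invented for illustration, in the same
Python style as above):

    # Toy control loop for the light-avoiding robot: the brighter the
    # light, the faster and further it runs (gains are invented).
    def control_step(photoreceptor_voltage):
        speed = 0.1 * photoreceptor_voltage      # arbitrary gain
        distance = 0.5 * photoreceptor_voltage   # arbitrary gain
        return ("away_from_light", speed, distance)

    # The same processor, disconnected or emulated on a PC, can be fed
    # values no physical photoreceptor could ever deliver:
    print(control_step(5.0))        # a dim light
    print(control_step(1000000.0))  # a "megavolt" input, no receptor to burn out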