Mike

In case you're curious, I wrote down my theory of
emotions here:

http://www.goertzel.org/dynapsyc/2004/Emotions.htm

(an early version of text that later became a chapter in The
Hidden Pattern)

Among the conclusions my theory of emotions leads to are, as stated there:

*****
    * AI systems clearly will have emotions
    * Their emotions will include, at least, happiness and sadness and
spiritual joy
    * Generally AI systems will probably experience less intense
emotions than humans, because they can have more robust virtual
multiverse modeling components, which are not so easily bollixed up –
so they'll less often have the experience of major
non-free-will-related mental-state shifts
    * Experiencing less intense emotions does not imply experiencing
less intense states of consciousness.  Emotion is only one particular
species of state-of-consciousness.
    * The specific emotions AI systems will experience will probably
be quite different from those of humans, and will quite possibly vary
widely among different AI systems
    * If you put an AI in a human-like body with the same sorts of
needs as primordial humans, it would probably develop very similar
emotions to the human ones

*****


-- Ben

On Dec 12, 2007 9:27 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> I don't think you've answered my point - which perhaps wasn't put well
> enough.
>
> All you propose, as far as I can see, is to apply *values* to behaviour - to
> apply positive and negative figures to behaviours considered beneficial or
> detrimental, and thus affect the system's further behaviour - reinforcing
> it, for example.
>
> It is more or less like a value approach to investing in stocks on the
> stock market - when their value goes up or down, a formula determines
> whether the system buys more or fewer shares.
>
> But this is a purely numerical approach to altering behaviour. There is
> nothing a priori wrong with it - although, in fact (this is a more complex
> argument which I won't really go into), it would never actually work for
> AGI, which has to deal with problems where it is impossible to apply
> precise or reliable values.
>
> But the important point here is that these *values* are not *emotions* at
> all. They're fundamentally different entities and affect behaviour in
> fundamentally different ways - your values, for example, will not cause any
> pleasure or pain to a self, or have a corporeal, hormonal nature, or
> conflict. You, like others, are trying to invest your value system with a
> complexity and dignity that it simply hasn't got and has no right to. It's
> absurd - you might just as well talk of every plus or minus sign in a
> mathematical calculation as conferring pleasure or pain.
>
> It also shows a very limited understanding of emotions.
>
>
>
> Matt: Mike Tintner <[EMAIL PROTECTED]> wrote:
>
> > Matt: I don't believe that the ability to feel pleasure and pain depends
> > on consciousness.  That is just a circular definition.
> > http://en.wikipedia.org/wiki/Philosophical_zombie
> >
> > Richard: It is not circular.  Consciousness and pleasure/pain are both
> > subjective issues.  They can be resolved together.
> >
> > Both of you, in fairly standard fashion, are approaching humans and
> > animals as if they were dissected on a table with
> > consciousness/emotions/pleasure & pain lying around.
> >
> > The reality is that we are integrated systems in which -
> >
> > a self
> >
> > is continually subjected to
> >
> > and feels (or to some extent may choose not to feel)
> >
> > emotions (involving pleasure/pain)
> >
> > via a (two-way) nervous system.
> >
> > The question Matt has to answer is:
> >
> > 1) are the systems you envisage going to have a self (to feel emotions) -
> > and if so, why?
>
> No, I am proposing a measure of reinforcement for intelligence in general,
> whether human, animal, or machine, all of which fall under Legg and Hutter's
> universal intelligence ( http://www.vetta.org/documents/ui_benelearn.pdf ),
> which is based on Hutter's AIXI model (
> http://www.hutter1.net/ai/aixigentle.htm ).  In this model, an agent and an
> environment are modeled by a pair of interactive Turing machines exchanging
> symbols.  In addition, the environment sends a utility or reinforcement
> signal to the agent at each step.  The goal of the agent is to maximize the
> accumulated utility.  The paper on universal intelligence (UI) proposes
> defining intelligence as the expected accumulated utility for a randomly
> chosen environment (from a Solomonoff distribution of environments, i.e.
> self-delimiting Turing machines chosen by coin flips).  Hutter's AIXI model
> shows that the most intelligent strategy is to guess at each step that the
> environment is simulated by the shortest program consistent with the
> observed interaction so far.  However, AIXI is not computable.
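>
> To make the setup concrete, here is a minimal Python sketch of the
> agent/environment loop - illustrative only, not code from the AIXI or UI
> papers, and with a toy guess-the-bit environment standing in for a random
> Solomonoff environment:
>
>     import random
>
>     def environment(action):
>         # Toy stand-in for an environment program: it hides a fixed bit
>         # and pays one unit of utility when the agent guesses it.
>         hidden_bit = 1
>         observation = random.randint(0, 1)
>         reward = 1 if action == hidden_bit else 0
>         return observation, reward
>
>     total_utility = 0
>     action = 0
>     for step in range(100):
>         observation, reward = environment(action)
>         total_utility += reward
>         # A true AIXI agent would choose the action maximizing expected
>         # future reward under the shortest program consistent with the
>         # history so far; this toy agent just switches when unrewarded.
>         if reward == 0:
>             action = 1 - action
>     print("accumulated utility:", total_utility)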
>
> In humans, it is natural to think of positive utility or reinforcement as a
> "reward" signal or pleasure, and negative utility as a penalty, such as
> pain.  In this respect, humans seek to maximize expected accumulated
> utility.  But this is not quite right, because utility has no scale in the
> AIXI/UI model.  If you double a reward (e.g. food or money) or punishment
> (e.g. electric shock) to a human or animal, you approximately double the
> change in behavior.  But in the AIXI/UI model, if you double the utility
> signal, the agent's strategy does not change.
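>
> The scale-invariance point in a few lines of Python (my illustration, not
> the paper's): doubling every expected utility leaves the argmax, and hence
> the agent's chosen strategy, unchanged.
>
>     expected_utility = {"press_lever": 3.0, "ignore_lever": 5.0}
>     doubled = {s: 2 * u for s, u in expected_utility.items()}
>     # The best strategy is the same under any positive rescaling.
>     assert max(expected_utility, key=expected_utility.get) == \
>            max(doubled, key=doubled.get)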
>
> I propose a measure of a bound on reinforcement which is more consistent
> with our intuitive notion of pain and pleasure.  The strength of a signal
> is bounded by the change in the state of the agent, i.e. the amount of
> information learned, as measured by Kolmogorov complexity.  This bound is
> consistent with intuition.  For example, a person under anesthesia feels no
> pain during surgery and also forms no memories (does no learning) during
> that time.  Drugs that increase the rate of learning (synaptic changes),
> such as hallucinogens, also heighten sensations of both pain and pleasure.
> Children learn faster than adults, and also react more strongly to pain and
> pleasure.
>
> Allow me to distinguish between utility and reinforcement as follows.  An
> agent's goal is to maximize utility, but utility is independent of the
> agent's behavior, and has no scale.  Reinforcement depends on the agent:
> if an agent's state changes from S1 to S2 as the result of reinforcement R,
> then |R| <= K(S2|S1), the number of bits needed to describe the state
> change.
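>
> K(S2|S1) is uncomputable, but a standard compressor gives a crude upper
> bound on it.  A sketch, assuming agent states can be serialized to byte
> strings (the states and function names here are hypothetical):
>
>     import zlib
>
>     def reinforcement_bound_bits(s1: bytes, s2: bytes) -> int:
>         # Approximate K(S2|S1) by the extra compressed bits needed to
>         # encode S2 once S1 is known; compressor overhead makes this an
>         # upper bound on the state change, and hence on |R|.
>         joint = len(zlib.compress(s1 + s2))
>         prior = len(zlib.compress(s1))
>         return 8 * max(0, joint - prior)
>
>     before = b"synaptic weights before the stimulus"
>     after = b"synaptic weights after a painful stimulus"
>     print(reinforcement_bound_bits(before, after), "bit bound on |R|")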
>
> If you accept this definition, then you could say that a human has 1000
> times more capacity to experience pleasure or pain than a mouse, because a
> human brain is 1000 times larger and therefore can learn 1000 times more.
> Likewise, if humans can learn 10^9 bits and autobliss (
> http://www.mattmahoney.net/autobliss.txt ) can learn 10^2 bits, then
> autobliss experiences 10^-7 as much pain or pleasure as a human.
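>
> Spelled out as arithmetic:
>
>     human_bits = 1e9      # learnable bits for a human (figure from above)
>     autobliss_bits = 1e2  # learnable bits for autobliss (figure from above)
>     print(autobliss_bits / human_bits)  # 1e-07 of human capacity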
>
> You can interpret this how you wish.  I make no claims about the morality of
> inflicting pain on animals or programs.  Morality is an evolved cultural
> belief.  We believe in compassion to other humans because tribes that
> practiced this belief (toward their own members) were more successful than
> those that didn't.  Likewise, we eat animals.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
