Re: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton
Hi Ben,

 Question: Will AGI's experience emotions like humans do?
 Answer:
 http://www.goertzel.org/dynapsyc/2004/Emotions.htm

I'm wondering whether *social* organisms are likely to have a more 
active emotional life because inner psychological states need to be 
flagged physiologically to other organisms that need to be able to read 
those states.  This will also apply across species in challenge-and-response 
situations (buzz off or I'll bite you, etc.).  Your point that physiological 
states operating outside the mental processes handled by the multiverse 
modeller are likely to bring on feelings of emotion makes sense in a 
situation involving trans-entity communication.  It would be possible for 
physiologically flagged emotional states (flushed face/body, raised hackles, 
a bared-teeth snarl, a broad grin, aroused sexual organs, etc.) to trigger a 
(pre-patterned?) response in another organism on an organism-wide, 
decentralised basis - tying in with your idea that certain responses require 
a degree of speed that precludes centralised processing.

So my guess would be that emotions in AIs would be more 
common/stronger if the AIs are *social* (i.e. capable of relating to other 
entities, whether other AIs or social biological entities) and they are able 
both to 'read' and perhaps to 'express/flag' psychological states - through 
'body language' as well as verbal language.

Maybe emotions, as humans experience them, are actually a muddled 
(and therefore interesting!?) hybrid of inner confusion in the multiverse 
modelling system and a broad, patterned communication system for 
projecting and reading *psychological states* - where the reason(s) for the 
state are not communicated, but the existence of the state is regarded 
(subconsciously? pre-programmed?) by one or both of the parties in the 
communication as being important.

Will AIs need to be able to share *psychological states*, as opposed to 
detailed rational data, with other AIs?  If AIs are to be good at 
communicating with humans, then chances are that the AIs will need to 
be able to convey some psychological states to humans, since humans 
seem to want to be able to read this sort of information.

Cheers, Philip



RE: [agi] AGI's and emotions

2004-02-22 Thread Ben Goertzel


Hi,

You've made two comments in two posts; I'll respond to them both together.

1) that sociality may be necessary for spiritual joy to emerge in a mind

Response: Clearly sociality is one thing that can push a mind in the
direction of appreciating its oneness with the universe, but I don't see
why it's the only thing that can do so.  I think the basic intuitive
truths underlying spiritual traditions can be recognized by ANY mind that
is self-aware and reflective, not just by a social mind.  For instance, if a
mind introspects into the way it constructs percepts, actions and objects --
the interpenetration of the perceived and constructed worlds -- then it
can be led down the path of grokking the harmony between the inner and outer
worlds, in a way that has nothing to do with sociality.

2) that sociality will lead to more intense emotions than asociality

Response: I don't think so.  I think that emotions are largely caused by the
experience of having one's mind-state controlled by internal forces way
outside one's will.  Now, in humans, some of these responses are
specifically induced by other humans or animals -- therefore some of our
emotions are explicitly social in nature.  But this doesn't imply that
emotions are necessarily social, nor that sociality is necessarily
emotional -- at least not in any obvious way that I can see.

I suppose you could try to construct an argument that sociality presents
computational problems that can ONLY be dealt with by mental subsystems that
operate in an automated way, outside the scope of human will.
However, I don't at present believe this to be true...

-- Ben G





RE: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton
Hi Ben,  

Why would an AGI be driven to achieve *general* harmony between 
inner and outer worlds - rather than just specific cases of congruence? 

Why would a desire for specific cases of congruence between the inner 
and outer worlds lead an AGI (that is not programmed or trained to do 
so) to appreciate (desire??) being at one with the *universe* (when you 
use that term, do you mean the Universe or just the outer world?)?  

And is a desire to seek *general* congruence between the inner and 
outer worlds by changing the world rather than changing the self a good 
recipe for creating a megalomaniac?

Cheers, Philip



RE: [agi] AGI's and emotions

2004-02-22 Thread Ben Goertzel


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Behalf Of Philip Sutton
 Sent: Sunday, February 22, 2004 12:41 PM
 To: [EMAIL PROTECTED]
 Subject: RE: [agi] AGI's and emotions


 Hi Ben,

 Why would an AGI be driven to achieve *general* harmony between
 inner and outer worlds - rather than just specific cases of congruence?

If one of its guiding principles is to seek maximum joy -- and (as I've
hypothesized) the intensity of a quale is proportional to the size of the
pattern to which the quale is attached -- then it will seek general harmony
because this is a bigger pattern than more specialized harmony.
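
To make that concrete, here is a tiny toy sketch in Python (purely
illustrative -- the pattern names and sizes are invented, and this is not
taken from the essay or from any actual AGI code): score each candidate
harmony by the size of the pattern it spans, and a joy-maximiser will pick
the most general one.

# Toy sketch (illustrative only; names and sizes are invented).
# Assumption: quale intensity is proportional to the size of the pattern
# the quale is attached to, so a joy-maximiser picks the biggest pattern.

candidate_harmonies = {
    "congruence about a single percept": 3,    # small pattern
    "congruence across one domain": 40,        # medium pattern
    "general inner/outer harmony": 500,        # large pattern
}

def joy(pattern_size):
    """Assumed proportionality: quale intensity ~ pattern size."""
    return pattern_size

best = max(candidate_harmonies, key=lambda h: joy(candidate_harmonies[h]))
print(best)   # prints: general inner/outer harmony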

 Why would a desire for specific cases of congruence between the inner
 and outer worlds lead an AGI (that is not programmed or trained to do
 so) to appreciate (desire??) being at one with the *universe*
 (when you use that term, do you mean the Universe or just the outer
 world?)?

The desire for inner/outer congruence is a special case of the desire for
pattern-finding, as manifested in the desires for Growth and Joy that I've
posited as desirable guiding principles...

 And is a desire to seek *general* congruence between the inner and
 outer worlds by changing the world rather than changing the self a good
 recipe for creating a megalomaniac?

This is the sort of reason why I don't posit Joy and Growth in themselves as
the ideal ethic.

Adding Choice to the mix provides a principle-level motivation not to impose
one's own will upon the universe without considering the wills of others as
well...

ben g



RE: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton



Hi Ben, 

 Adding Choice to the mix provides a principle-level motivation not to
 impose one's own will upon the universe without considering the wills
 of others as well... 

Whose choice - everyone's or the AGI's?  That has to be specified in the 
ethic - otherwise it could be the AGI's alone, in which case the AGI would 
*certainly* consider the wills of others, but only to see that they did not 
block the will of the AGI. 


A carelessly structured goal set leading to the pursuit of 
choice/growth/joy could still lead to a megalomaniac, it seems to me.


Cheers, Philip











RE: [agi] AGI's and emotions

2004-02-22 Thread Ben Goertzel




Yes, of course a brief ethical slogan like "choice, growth and joy" is 
underspecified, and all the terms need to be better defined, either by 
example or by formal elucidation, etc.  I carry out some of this elucidation 
in the Encouraging a Positive Transcension essay that triggered this whole 
dialogue...

ben g
