Mark,

> I understand your point but have an emotional/ethical problem with it.
> I'll have to ponder that for a while.

Try to view our AI as an extension of our intelligence rather than as
purely its own kind.

>> For humans - yes, for our artificial problem solvers - emotion is a
>> disease.
> What if the emotion is solely there to enforce our goals?
> Or maybe better ==> Not violate our constraints = comfortable, violate
> our constraints = feel discomfort/sick/pain.

Intelligence is meaningless without discomfort. Unless your PC gets some
sort of "feel card", it cannot really prefer, cannot set goals, and cannot
have "hard feelings" about working extremely hard for you. You can a) spend
time figuring out how to build the card, build it, plug it in, and (with
potential risks) tune it to be friendly enough that it actually comes up
with goals compatible with yours, *OR* b) you can "simply" tell your
"feeling-free" AI what problems you want it to work on. Your choice.. I
just hope we don't eventually end up asking the "b)" solutions how to clean
up a great mess caused by the "a)" solutions.

Best,
Jiri Jelinek

On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:

>> emotions.. to a) provide goals.. b) provide pre-programmed
>> constraints, and c) enforce urgency.

> Our AI = our tool = should work for us = will get high level goals (+
> urgency info and constraints) from us. Allowing other sources of high
> level goals = potentially asking for conflicts.
> For sub-goals, AI can go with reasoning.

Hmmm.  I understand your point but have an emotional/ethical problem with
it.  I'll have to ponder that for a while.

> For humans - yes, for our artificial problem solvers - emotion is a
disease.
What if the emotion is solely there to enforce our goals?  Fulfill our
goals = be happy, fail at our goals = be *very* sad.  Or maybe better ==>
Not violate our constraints = comfortable, violate our constraints = feel
discomfort/sick/pain.
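
In code form, that's something like this (a toy sketch only -- every name
here is made up): affect becomes a scalar the planner maximizes, with
constraint violations weighted so heavily that they dominate any goal
payoff:

    # Toy sketch (all names hypothetical): affect as a scalar signal
    # that enforces our goals and constraints.
    def affect(state, goals, constraints):
        score = 0.0
        for goal in goals:
            # Fulfill our goals = be happy, fail at our goals = be sad.
            score += 1.0 if goal.fulfilled(state) else -1.0
        for constraint in constraints:
            # Violate our constraints = discomfort/sick/pain, weighted
            # so it dominates any possible goal payoff.
            if constraint.violated(state):
                score -= 100.0
        return score

    def choose(actions, state, goals, constraints):
        # The agent prefers whatever it expects to "feel" best about.
        return max(actions,
                   key=lambda a: affect(a.outcome(state), goals, constraints))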

 ----- Original Message -----
*From:* Jiri Jelinek <[EMAIL PROTECTED]>
*To:* [email protected]
*Sent:* Tuesday, May 01, 2007 2:29 PM
*Subject:* Re: [agi] Pure reason is a disease.

> emotions.. to a) provide goals.. b) provide pre-programmed constraints,
> and c) enforce urgency.

Our AI = our tool = should work for us = will get high level goals (+
urgency info and constraints) from us. Allowing other sources of high level
goals = potentially asking for conflicts. For sub-goals, AI can go with
reasoning.
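
A minimal sketch of what I mean (hypothetical names, nothing official):
the human operator owns the only channel for top-level goals, and the AI's
job is just to decompose them:

    # Minimal sketch (hypothetical names): high-level goals, urgency, and
    # constraints come only from us; sub-goals come from reasoning.
    class ToolAI:
        def __init__(self):
            self.top_goals = []   # written only through assign(), by a human

        def assign(self, goal, urgency, constraints):
            # The single entry point for high-level goals.
            self.top_goals.append((goal, urgency, constraints))

        def decompose(self, goal, constraints):
            # Domain reasoning goes here; it may produce sub-goals,
            # but never new top-level goals.
            raise NotImplementedError

        def plan(self):
            # Work on the most urgent assignment first.
            ordered = sorted(self.top_goals, key=lambda t: -t[1])
            return [self.decompose(g, c) for (g, _, c) in ordered]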

> Pure reason is a disease

For humans - yes, for our artificial problem solvers - emotion is a
disease.

Jiri Jelinek

On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:

>  >> My point, in that essay, is that the nature of human emotions is
>  >> rooted in the human brain architecture,
>
>     I'll agree that human emotions are rooted in human brain
> architecture but there is also the question -- is there something analogous
> to emotion which is generally necessary for *effective* intelligence?  My
> answer is a qualified but definite yes since emotion clearly serves a number
> of purposes that apparently aren't otherwise served (in our brains) by our
> pure logical reasoning mechanisms (although, potentially, there may be
> something else that serves those purposes equally well).  In particular,
> emotions seem necessary (in humans) to a) provide goals, b) provide
> pre-programmed constraints (for when logical reasoning doesn't have enough
> information), and c) enforce urgency.
>
>     Without looking at these things that emotions provide, I'm not sure
> that you can create an *effective* general intelligence (since these roles
> need to be filled by *something*).
>
> >> Because of the difference mentioned in the prior paragraph, the rigid
> >> distinction between emotion and reason that exists in the human brain
> >> will not exist in a well-designed AI.
>
>     Which is exactly why I was arguing that emotions and reason (or
> feeling and thinking) were a spectrum rather than a dichotomy.
>
>  ----- Original Message -----
> *From:* Benjamin Goertzel <[EMAIL PROTECTED]>
> *To:* [email protected]
> *Sent:* Tuesday, May 01, 2007 1:05 PM
> *Subject:* Re: [agi] Pure reason is a disease.
>
>
>
>  On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> >
> >  >> Well, this tells you something interesting about the human
> >  >> cognitive architecture, but not too much about intelligence in
> >  >> general...
> >
> > How do you know that it doesn't tell you much about intelligence in
> > general?  That was an incredibly dismissive statement.  Can you justify it?
> >
>
>
> Well, I tried to, in the essay that I pointed to in my response.
>
> My point, in that essay, is that the nature of human emotions is rooted
> in the human brain architecture, according to which our systemic
> physiological responses to cognitive phenomena ("emotions") are rooted in
> primitive parts of the brain that we don't have much conscious introspection
> into.  So, we actually can't reason about the intermediate conclusions that
> go into our emotional reactions very easily, because the "conscious,
> reasoning" parts of our brains don't have the ability to look into the
> intermediate results stored and manipulated within the more primitive
> "emotionally reacting" parts of the brain.  So our deliberative
> consciousness has choice of either
>
> -- accepting not-very-thoroughly-analyzable outputs from the emotional
> parts of the brain
>
> or
>
> -- rejecting them
>
> and doesn't have the choice to focus deliberative attention on the
> intermediate steps used by the emotional brain to arrive at its conclusions.
>
>
> Of course, through years of practice one can learn to bring more and
> more of the emotional brain's operations into the scope of conscious
> deliberation, but one can never do this completely due to the structure of
> the human brain.
>
> On the other hand, an AI need not have the same restrictions.  An AI
> should be able to introspect into the intermediate conclusions and
> manipulations used to arrive at its "feeling responses".  Yes, there are
> restrictions on the amount of introspection possible, imposed by
> computational resource limitations; but this is different from the
> blatant and severe architectural restrictions imposed by the design of
> the human brain.
>
> Because of the difference mentioned in the prior paragraph, the rigid
> distinction between emotion and reason that exists in the human brain
> will not exist in a well-designed AI.
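>
> To put the introspection point in code (just a toy sketch with made-up
> names): the essential property is that every intermediate appraisal is
> written somewhere the deliberative layer can read, rather than locked
> inside an opaque module:
>
>     # Toy sketch (hypothetical names): appraisals leave an inspectable
>     # trace instead of returning only an opaque verdict.
>     from dataclasses import dataclass, field
>
>     @dataclass
>     class Appraisal:
>         stimulus: str
>         steps: list = field(default_factory=list)  # intermediate conclusions
>         verdict: float = 0.0                       # the "feeling response"
>
>     def appraise(stimulus, rules):
>         trace = Appraisal(stimulus)
>         for rule in rules:
>             conclusion, weight = rule(stimulus)
>             trace.steps.append(conclusion)  # visible to deliberation
>             trace.verdict += weight
>         return trace  # the whole trace, not just the final verdict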
>
> Sorry for not giving references regarding my analysis of the human
> cognitive/neural system -- I have read them but don't have the reference
> list at hand. Some (but not a thorough list) are given in the article I
> referenced before.
>
> -- Ben G
