Hi Jiri,
OK, I pondered it for a while and the answer is -- "failure modes".
Your logic is correct. If I were willing to take all of your assumptions as
always true, then I would agree with you. However, logic that relies upon a
single chain of reasoning is relatively fragile, and when it rests upon bad
assumptions, it can be a roadmap to disaster.
I believe that it is very possible (nay, very probable) for an "Artificial
Problem Solver" to end up with a goal that was not intended by you. This can
happen in any number of ways, from incorrect reasoning in an imperfect world to
robot-rights activists deliberately programming pro-robot goals into it.
Your statement "Allowing other sources of high level goals = potentially asking
for conflicts." is undoubtedly true but believing that you can stop all other
sources of high level goals is . . . . simply incorrect.
Now, look at how I reacted to your initial e-mail. My logic said, "Cool!
Let's go implement this." My intuition/emotions said, "Wait a minute. There's
something wonky here. Even if I can't put my finger on it, maybe we'd better
hold up until we can investigate this further." Now -- which way would you
like your Jupiter brain to react?
Richard Loosemore has suggested on this list that Friendliness could also
be implemented as a large number of loose constraints. I view emotions as sort
of operating this way and, in part, serving this purpose. Further, recent
brain research makes it quite clear that human beings have two distinct
sources of "morality" -- one logical and one emotional
(http://www.slate.com/id/2162998/pagenum/all/#page_start). This is, in part,
what I was thinking of when I listed "b) provide pre-programmed constraints
(for when logical reasoning doesn't have enough information)" as one of the
reasons why emotion was required.
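To make the loose-constraints idea concrete, here is a rough sketch in
Python (every name, weight, and threshold below is invented purely for
illustration -- this is my guess at the shape of the thing, not Richard's
actual proposal): each constraint returns a graded penalty rather than a
pass/fail verdict, and the aggregate "unease" can veto a plan even when no
single constraint fires decisively.

from typing import Callable, List

class LooseConstraint:
    """A soft constraint: yields a penalty in [0, 1] instead of pass/fail."""
    def __init__(self, name: str, penalty_fn: Callable[[dict], float]):
        self.name = name
        self.penalty_fn = penalty_fn

    def penalty(self, plan: dict) -> float:
        # Clamp so one noisy constraint can't dominate the aggregate.
        return max(0.0, min(1.0, self.penalty_fn(plan)))

def unease(plan: dict, constraints: List[LooseConstraint]) -> float:
    """Aggregate discomfort: no single constraint decides, but many small
    penalties add up -- a 'something is wonky here' signal."""
    return sum(c.penalty(plan) for c in constraints) / len(constraints)

def decide(plan: dict, constraints: List[LooseConstraint],
           threshold: float = 0.3) -> str:
    # Even when single-chain logic approves the plan, high aggregate
    # unease holds it for further investigation.
    if unease(plan, constraints) > threshold:
        return "hold: investigate further"
    return "proceed"

constraints = [
    LooseConstraint("irreversibility", lambda p: p.get("irreversible", 0.0)),
    LooseConstraint("novel goal source", lambda p: p.get("goal_novelty", 0.0)),
    LooseConstraint("oversight loss", lambda p: p.get("oversight_loss", 0.0)),
]
print(decide({"irreversible": 0.5, "goal_novelty": 0.4,
              "oversight_loss": 0.2}, constraints))
# -> hold: investigate further (no single constraint fired decisively)

Note that the hold comes from many weak signals adding up -- which is roughly
how my intuition reacted to your initial e-mail above.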
I would strongly argue that an intelligence with well-designed feelings is
far, far more likely to stay Friendly than an intelligence without feelings --
and there is substantial evidence for this as well in our perception of, and
stories about, "emotionless" people.
Mark
P.S. Great discussion. Thank you.
----- Original Message -----
From: Jiri Jelinek
To: [email protected]
Sent: Tuesday, May 01, 2007 6:21 PM
Subject: Re: [agi] Pure reason is a disease.
Mark,
>I understand your point but have an emotional/ethical problem with it. I'll
>have to ponder that for a while.
Try to view our AI as an extension of our intelligence rather than
purely its own kind.
>> For humans - yes, for our artificial problem solvers - emotion is a
>> disease.
>What if the emotion is solely there to enforce our goals?
>Or maybe better ==> Not violate our constraints = comfortable, violate our
>constraints = feel discomfort/sick/pain.
Intelligence is meaningless without discomfort. Unless your PC gets some sort
of "feel card", it cannot really prefer, cannot set goal(s), and cannot have
"hard feelings" about working extremely hard for you. You can a) spend time
figuring out how to build the card, build it, plug it in, and (with potential
risks) tune it to make it friendly enough that it will actually come up with
goals compatible enough with your goals, *OR* b) you can "simply" tell
your "feeling-free" AI what problems you want it to work on. Your choice. I
hope we won't eventually end up asking the "b)" solutions how to clean up a
great mess caused by the "a)" solutions.
Best,
Jiri Jelinek
On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> emotions.. to a) provide goals.. b) provide pre-programmed constraints,
>> and c) enforce urgency.
> Our AI = our tool = should work for us = will get high level goals (+
> urgency info and constraints) from us. Allowing other sources of high level
> goals = potentially asking for conflicts.
> For sub-goals, AI can go with reasoning.
Hmmm. I understand your point but have an emotional/ethical problem with
it. I'll have to ponder that for a while.
> For humans - yes, for our artificial problem solvers - emotion is a
> disease.
What if the emotion is solely there to enforce our goals? Fulfill our
goals = be happy, fail at our goals = be *very* sad. Or maybe better ==> Not
violate our constraints = comfortable, violate our constraints = feel
discomfort/sick/pain.
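(A purely illustrative sketch of that enforcement, with made-up names and
weights: outcomes map to a scalar "feeling", violating a constraint hurts far
more than goal progress pays, and the agent simply prefers whatever it would
feel best about.)

def valence(goal_progress: float, violations: int) -> float:
    """Happy in proportion to progress on *our* goals; sharply negative
    "discomfort/pain" for each constraint violated along the way."""
    return goal_progress - 10.0 * violations

# (progress toward our goals, constraints violated)
actions = {
    "fast but cuts corners": (0.9, 1),
    "slower and compliant":  (0.6, 0),
}
best = max(actions, key=lambda a: valence(*actions[a]))
print(best)  # -> slower and compliant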
----- Original Message -----
From: Jiri Jelinek
To: [email protected]
Sent: Tuesday, May 01, 2007 2:29 PM
Subject: Re: [agi] Pure reason is a disease.
>emotions.. to a) provide goals.. b) provide pre-programmed constraints,
>and c) enforce urgency.
Our AI = our tool = should work for us = will get high level goals (+
urgency info and constraints) from us. Allowing other sources of high level
goals = potentially asking for conflicts. For sub-goals, AI can go with
reasoning.
>Pure reason is a disease
For humans - yes, for our artificial problem solvers - emotion is a
disease.
Jiri Jelinek
On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> My point, in that essay, is that the nature of human emotions is
>> rooted in the human brain architecture,
I'll agree that human emotions are rooted in human brain
architecture but there is also the question -- is there something analogous to
emotion which is generally necessary for *effective* intelligence? My answer
is a qualified but definite yes since emotion clearly serves a number of
purposes that apparently aren't otherwise served (in our brains) by our pure
logical reasoning mechanisms (although, potentially, there may be something
else that serves those purposes equally well). In particular, emotions seem
necessary (in humans) to a) provide goals, b) provide pre-programmed
constraints (for when logical reasoning doesn't have enough information), and
c) enforce urgency.
Without accounting for these things that emotions provide, I'm not sure
that you can create an *effective* general intelligence (since these roles need
to be filled by *something*).
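As a rough sketch of how those three roles might be filled by *something*
(all the structures below are invented for illustration): goals arrive from
outside the agent, constraints are pre-programmed, and urgency determines
what gets attention first.

import heapq

class Agent:
    def __init__(self, constraints):
        self.constraints = constraints   # (b) pre-programmed constraints
        self.goals = []                  # (a) goals supplied from outside

    def add_goal(self, description: str, urgency: float):
        # (c) urgency: higher-urgency goals preempt deliberation on
        # lower-urgency ones (heapq is a min-heap, so negate).
        heapq.heappush(self.goals, (-urgency, description))

    def next_action(self) -> str:
        _, description = heapq.heappop(self.goals)
        # Constraints gate goals for when reasoning alone can't be trusted.
        if any(not ok(description) for ok in self.constraints):
            return "refuse: " + description
        return "work on: " + description

agent = Agent(constraints=[lambda d: "harm" not in d])
agent.add_goal("tidy the lab", urgency=0.2)
agent.add_goal("fire alarm: evacuate visitors", urgency=0.99)
print(agent.next_action())  # -> work on: fire alarm: evacuate visitors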
>> Because of the difference mentioned in the prior paragraph, the
>> rigid distinction between emotion and reason that exists in the human brain
>> will not exist in a well-designed AI.
Which is exactly why I was arguing that emotions and reason (or
feeling and thinking) were a spectrum rather than a dichotomy.
----- Original Message -----
From: Benjamin Goertzel
To: [email protected]
Sent: Tuesday, May 01, 2007 1:05 PM
Subject: Re: [agi] Pure reason is a disease.
On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> Well, this tells you something interesting about the human
>> cognitive architecture, but not too much about intelligence in general...
How do you know that it doesn't tell you much about intelligence in
general? That was an incredibly dismissive statement. Can you justify it?
Well, I tried to, in the essay that I pointed to in my response.
My point, in that essay, is that the nature of human emotions is
rooted in the human brain architecture, according to which our systemic
physiological responses to cognitive phenomena ("emotions") are rooted in
primitive parts of the brain that we don't have much conscious introspection
into. So, we actually can't reason about the intermediate conclusions that go
into our emotional reactions very easily, because the "conscious, reasoning"
parts of our brains don't have the ability to look into the intermediate
results stored and manipulated within the more primitive "emotionally reacting"
parts of the brain. So our deliberative consciousness has the choice of either
-- accepting not-very-thoroughly-analyzable outputs from the
emotional parts of the brain
or
-- rejecting them
and it doesn't have the choice to focus deliberative attention on the
intermediate steps used by the emotional brain to arrive at its conclusions.
Of course, through years of practice one can learn to bring more and
more of the emotional brain's operations into the scope of conscious
deliberation, but one can never do this completely due to the structure of the
human brain.
On the other hand, an AI need not have the same restrictions. An AI
should be able to introspect into the intermediary conclusions and
manipulations used to arrive at its "feeling responses". Yes, there are
restrictions on the amount of introspection possible, imposed by computational
resource limitations; but this is different from the blatant and severe
architectural restrictions imposed by the design of the human brain.
Because of the difference mentioned in the prior paragraph, the rigid
distinction between emotion and reason that exists in the human brain will not
exist in a well-designed AI.
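A minimal sketch of that contrast (every name below is invented for
illustration): the "feeling" computation records its intermediate conclusions
in a trace that the deliberative layer can inspect, instead of handing over
an opaque accept-or-reject output the way the human emotional brain does.

def feel(situation: dict) -> tuple[float, list[str]]:
    trace = []   # intermediate conclusions, open to introspection
    score = 0.0
    if situation.get("unfamiliar"):
        score -= 0.4
        trace.append("unfamiliar situation: -0.4")
    if situation.get("past_outcome") == "bad":
        score -= 0.5
        trace.append("resembles a past bad outcome: -0.5")
    trace.append("final feeling: %.1f" % score)
    return score, trace

feeling, trace = feel({"unfamiliar": True, "past_outcome": "bad"})
# A human deliberator gets only `feeling`; this agent can also ask why:
for step in trace:
    print(step)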
Sorry for not giving references regarding my analysis of the human
cognitive/neural system -- I have read them but don't have the reference list
at hand. Some (but not a thorough list) are given in the article I referenced
before.
-- Ben G
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936