Mark,

>logic, when it relies upon single chain reasoning, is relatively fragile.
>And when it rests upon bad assumptions, it can be just a roadmap to
>disaster.

It all improves with learning. In my design (not implemented yet), the AGI
learns from stories and (assuming it has learned enough) can complete
incomplete stories.

E.g.:
Story name: $tory
[1] Mark has $0.
[2] ..[to be generated by AGI]..
[3] Mark has $1M.

As the number of learned/solved stories grows, better/different solutions
can be generated.
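
To make the idea concrete, here is a toy sketch in Python (all names and
structures are made up for illustration; this is not the actual design):

# Toy sketch (hypothetical): learn solved stories, then fill in the
# middle of a new story whose first and last lines match a known one.
from typing import List, Optional

class StoryStore:
    def __init__(self) -> None:
        self.solved: List[List[str]] = []  # each story: ordered statements

    def learn(self, story: List[str]) -> None:
        self.solved.append(story)

    def complete(self, first: str, last: str) -> Optional[List[str]]:
        # Return the middle steps of any learned story that starts and
        # ends with the given statements; None if nothing matches yet.
        for story in self.solved:
            if len(story) >= 2 and story[0] == first and story[-1] == last:
                return story[1:-1]
        return None

store = StoryStore()
store.learn(["Mark has $0.", "Mark writes a bestseller.", "Mark has $1M."])
print(store.complete("Mark has $0.", "Mark has $1M."))
# -> ['Mark writes a bestseller.']

A real system would of course match on meaning rather than exact strings
and rank multiple candidate middles.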

>I believe that it is very possible (nay, very probable) for an "Artificial
>Program Solver" to end up with a goal that was not intended by you.

For emotion/feeling-enabled AGI - possibly.
For feeling-free AGI - only if it's buggy.

Distinguish:
a) given goals (e.g. the [3]) and
b) generated sub-goals.

In my system, there is an admin feature that can restrict both for
lower-level users. Besides that, to control b), I use subject-level and
story-level user-controlled profiles (with inheritance supported). For
example, if Mark is linked to a "Life lover" profile that includes the
"Never kill" rule, the sub-goal queries simply exclude the Kill action.
Breaking the rule would just produce invalid solutions nobody is interested
in. I'm simplifying a bit, but the bottom line is that both a) and b) can
be controlled/restricted.
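
A toy sketch of the profile idea (Python; names are made up and the real
system is more involved than this):

# Toy sketch (hypothetical names): profiles ban actions, inherit from a
# parent profile, and sub-goal queries exclude banned actions up front.
from typing import List, Optional, Set

class Profile:
    def __init__(self, name: str, banned: Set[str],
                 parent: Optional["Profile"] = None) -> None:
        self.name = name
        self.banned = banned
        self.parent = parent  # inherited rules also apply

    def allows(self, action: str) -> bool:
        if action in self.banned:
            return False
        return self.parent.allows(action) if self.parent else True

def filter_subgoals(actions: List[str], profile: Profile) -> List[str]:
    # The query itself never even sees rule-breaking actions.
    return [a for a in actions if profile.allows(a)]

life_lover = Profile("Life lover", {"Kill"})
mark = Profile("Mark", {"Steal"}, parent=life_lover)
print(filter_subgoals(["Work", "Kill", "Steal", "Invest"], mark))
# -> ['Work', 'Invest']

The point is just that inherited rules compose: Mark's own profile bans
Steal, the inherited "Life lover" profile bans Kill, and both get excluded
before any search happens.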

>believing that you can stop all other sources of high level goals is . . . .
>simply incorrect.

IMO, that depends on the design and on the nature & number of users involved.

>Now, look at how I reacted to your initial e-mail.  My logic said "Cool!
>Let's go implement this."  My intuition/emotions said "Wait a minute.
>There's something wonky here.  Even if I can't put my finger on it, maybe
>we'd better hold up until we can investigate this further".  Now -- which
>way would you like your Jupiter brain to react?

See, you had a conflict in your mind. Our brains are sort of messed up. In a
single brain, we have more or less independent lines of thinking on multiple
levels, combined with various data-visibility and thought-line-comparison
issues. I know, that's a lot of data to process - especially for real-time
solutions - so maybe Mother Nature had to sacrifice conflict-free design for
faster thinking (after all, it more or less works), but I don't think it
needs to be that way for AGI. If one line of thought is well done, you don't
have conflicts and don't need the others (if well done, they would return
the same results).

>Richard Loosemore has suggested on this list that Friendliness could also
>be implemented as a large number of loose constraints.

I agree with that.

>I view emotions as sort of operating this way and, in part, serving this
>purpose.
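
Here is how I picture "many loose constraints" working, as a toy sketch
(Python; purely illustrative, not Richard's actual proposal):

# Toy sketch: Friendliness as many weighted soft constraints scored as
# penalties, rather than a few hard rules (illustrative only).
from typing import Callable, List, Tuple

# Each constraint: (weight, scorer); scorer returns violation in [0, 1].
Constraint = Tuple[float, Callable[[str], float]]

def penalty(action: str, constraints: List[Constraint]) -> float:
    # Weighted sum of soft-constraint violation scores.
    return sum(w * score(action) for w, score in constraints)

constraints: List[Constraint] = [
    (10.0, lambda a: 1.0 if "harm" in a else 0.0),     # strong aversion
    (2.0,  lambda a: 1.0 if "deceive" in a else 0.0),  # milder aversion
    (0.5,  lambda a: 1.0 if "annoy" in a else 0.0),    # weak aversion
]

candidates = ["help user", "annoy user", "deceive and harm user"]
print(min(candidates, key=lambda a: penalty(a, constraints)))
# -> help user

No single constraint acts as a hard veto; the weighted sum does the work,
which is what makes the constraints "loose".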

Paul Ekman's list of emotions:

   * anger
   * fear
   * sadness
   * happiness
   * disgust
   * surprise

When it comes to those emotions, an AGI IMO just needs to be able to
learn/understand the related behavior of various creatures. Nothing more,
nothing less.

>Further, recent brain research makes it quite clear that human beings have
>two clear and distinct sources of "morality" -- both logical and emotional

Poor design, from my perspective.

>I would strongly argue that an intelligence with well-designed feelings is
>far, far more likely to stay Friendly than an intelligence without feelings

An AI without feelings (unlike its user) cannot really become unfriendly.
It's just a tool (like a knife).

>how giving a goal of "avoid x" is truly *different* from discomfort

It's the "do" vs "NEED to do".
Discomfort requires an extra sensor supporting the ability to prefer on its
own.

Jiri



On 5/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:

 Hi Jiri,

    OK, I pondered it for a while and the answer is -- "failure modes".

    Your logic is correct.  If I were willing take all of your assumptions
as always true, then I would agree with you.  However, logic, when it relies
upon single chain reasoning, is relatively fragile.  And when it rests upon
bad assumptions, it can be just a roadmap to disaster.

    I believe that it is very possible (nay, very probable) for an
"Artificial Program Solver" to end up with a goal that was not intended by
you.  This can happen in any number of ways from incorrect reasoning in an
imperfect world to robots rights activists deliberately programming
pro-robot goals into them.  Your statement "Allowing other sources of high
level goals = potentially asking for conflicts." is undoubtedly true but
believing that you can stop all other sources of high level goals is . . . .
simply incorrect.

    Now, look at how I reacted to your initial e-mail.  My logic said
"Cool!  Let's go implement this."  My intuition/emotions said "Wait a
minute.  There's something wonky here.  Even if I can't put my finger on it,
maybe we'd better hold up until we can investigate this further".  Now --
which way would you like your Jupiter brain to react?

    Richard Loosemore has suggested on this list that Friendliness could
also be implemented as a large number of loose constraints.  I view emotions
as sort of operating this way and, in part, serving this purpose.  Further,
recent brain research makes it quite clear that human beings have two clear
and distinct sources of "morality" -- both logical and emotional (
http://www.slate.com/id/2162998/pagenum/all/#page_start).  This is, in
part, what I was thinking of when I listed "b) provide pre-programmed
constraints (for when logical reasoning doesn't have enough information)" as
one of the reasons why emotion was required.

    I would strongly argue that an intelligence with well-designed
feelings is far, far more likely to stay Friendly than an intelligence
without feelings -- and I would argue that there is substantial evidence for
this as well in our perception of and stories about "emotionless" people.

        Mark

P.S.  Great discussion.  Thank you.

----- Original Message -----
*From:* Jiri Jelinek <[EMAIL PROTECTED]>
*To:* [email protected]
*Sent:* Tuesday, May 01, 2007 6:21 PM
*Subject:* Re: [agi] Pure reason is a disease.

Mark,

>I understand your point but have an emotional/ethical problem with it. I'll
>have to ponder that for a while.

Try to view our AI as an extension of our intelligence rather than
purely-its-own-kind.

>> For humans - yes, for our artificial problem solvers - emotion is a
>> disease.
>What if the emotion is solely there to enforce our goals?
>Or maybe better ==> Not violate our constraints = comfortable, violate
>our constraints = feel discomfort/sick/pain.

Intelligence is meaningless without discomfort. Unless your PC gets some
sort of "feel card", it cannot really prefer, cannot set goal(s), and cannot
have "hard feelings" about working extremely hard for you. You can a) spend
time figuring out how to build the card, build it, plug it in, and (with
potential risks) tune it to make it friendly enough so it will actually come
up with goals that are compatible enough with your goals *OR* b) you can
"simply" tell your "feeling-free" AI what problems you want it to work on.
Your choice. I hope we are eventually not gonna end up asking the "b)"
solutions how to clean up a great mess caused by the "a)" solutions.

Best,
Jiri Jelinek

On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>  >> emotions.. to a) provide goals.. b) provide pre-programmed
> constraints, and c) enforce urgency.
> > Our AI = our tool = should work for us = will get high level goals (+
> urgency info and constraints) from us. Allowing other sources of high level
> goals = potentially asking for conflicts. For sub-goals, AI can go with
> reasoning.
>
> Hmmm.  I understand your point but have an emotional/ethical problem
> with it.  I'll have to ponder that for a while.
>
> > For humans - yes, for our artificial problem solvers - emotion is a
> disease.
> What if the emotion is solely there to enforce our goals?  Fulfill our
> goals = be happy, fail at our goals = be *very* sad.  Or maybe better ==>
> Not violate our constraints = comfortable, violate our constraints = feel
> discomfort/sick/pain.
>
>  ----- Original Message -----
> *From:* Jiri Jelinek <[EMAIL PROTECTED]>
> *To:* [email protected]
>  *Sent:* Tuesday, May 01, 2007 2:29 PM
> *Subject:* Re: [agi] Pure reason is a disease.
>
> >emotions.. to a) provide goals.. b) provide pre-programmed constraints,
> and c) enforce urgency.
>
> Our AI = our tool = should work for us = will get high level goals (+
> urgency info and constraints) from us. Allowing other sources of high level
> goals = potentially asking for conflicts. For sub-goals, AI can go with
> reasoning.
>
> >Pure reason is a disease
>
> For humans - yes, for our artificial problem solvers - emotion is a
> disease.
>
> Jiri Jelinek
>
>  On 5/1/07, Mark Waser < [EMAIL PROTECTED]> wrote:
>
> >  >> My point, in that essay, is that the nature of human emotions is
> > rooted in the human brain architecture,
> >
> >     I'll agree that human emotions are rooted in human brain
> > architecture but there is also the question -- is there something analogous
> > to emotion which is generally necessary for *effective* intelligence?  My
> > answer is a qualified but definite yes since emotion clearly serves a number
> > of purposes that apparently aren't otherwise served (in our brains) by our
> > pure logical reasoning mechanisms (although, potentially, there may be
> > something else that serves those purposes equally well).  In particular,
> > emotions seem necessary (in humans) to a) provide goals, b) provide
> > pre-programmed constraints (for when logical reasoning doesn't have enough
> > information), and c) enforce urgency.
> >
> >     Without looking at these things that emotions provide, I'm not
> > sure that you can create an *effective* general intelligence (since these
> > roles need to be filled by *something*).
> >
> > >> Because of the difference mentioned in the prior paragraph, the
> > rigid distinction between emotion and reason that exists in the human brain
> > will not exist in a well-designed AI.
> >
> >     Which is exactly why I was arguing that emotions and reason (or
> > feeling and thinking) were a spectrum rather than a dichotomy.
> >
> >  ----- Original Message -----
> > *From:* Benjamin Goertzel <[EMAIL PROTECTED]>
> > *To:* [email protected]
> > *Sent:* Tuesday, May 01, 2007 1:05 PM
> > *Subject:* Re: [agi] Pure reason is a disease.
> >
> >
> >
> >  On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > >
> > >  >> Well, this tells you something interesting about the human
> > > cognitive architecture, but not too much about intelligence in general...
> > >
> > > How do you know that it doesn't tell you much about intelligence in
> > > general?  That was an incredibly dismissive statement.  Can you justify
> > > it?
> > >
> >
> >
> > Well I tried to in the essay that I pointed to in my response.
> >
> > My point, in that essay, is that the nature of human emotions is
> > rooted in the human brain architecture, according to which our systemic
> > physiological responses to cognitive phenomena ("emotions") are rooted in
> > primitive parts of the brain that we don't have much conscious introspection
> > into.  So, we actually can't reason about the intermediate conclusions that
> > go into our emotional reactions very easily, because the "conscious,
> > reasoning" parts of our brains don't have the ability to look into the
> > intermediate results stored and manipulated within the more primitive
> > "emotionally reacting" parts of the brain.  So our deliberative
> > consciousness has the choice of either
> >
> > -- accepting not-very-thoroughly-analyzable outputs from the emotional
> > parts of the brain
> >
> > or
> >
> > -- rejecting them
> >
> > and doesn't have the choice to focus deliberative attention on the
> > intermediate steps used by the emotional brain to arrive at its conclusions.
> >
> >
> > Of course, through years of practice one can learn to bring more and
> > more of the emotional brain's operations into the scope of conscious
> > deliberation, but one can never do this completely due to the structure of
> > the human brain.
> >
> > On the other hand, an AI need not have the same restrictions.  An AI
> > should be able to introspect into the intermediary conclusions and
> > manipulations used to arrive at its "feeling responses".  Yes there are
> > restrictions on the amount of introspection possible, imposed by
> > computational resource limitations; but this is different than the blatant
> > and severe architectural restrictions imposed by the design of the human
> > brain.
> >
> > Because of the difference mentioned in the prior paragraph, the rigid
> > distinction between emotion and reason that exists in the human brain will
> > not exist in a well-designed AI.
> >
> > Sorry for not giving references regarding my analysis of the human
> > cognitive/neural system -- I have read them but don't have the reference
> > list at hand. Some (but not a thorough list) are given in the article I
> > referenced before.
> >
> > -- Ben G
>
