Richard, in your November 02, 2007 11:15 AM post you stated:

"If AI systems are built with motivation systems that are stable, then we
could predict that they will remain synchronized with the goals of the
human race until the end of history."

and

"I can think of many, many types of non-goal-stack motivational systems
for which [Matt's statement about the inherent instability of goal systems
of recursively self improving AGIs] is a complete falsehood."

In your 11/3/2007 1:17 PM post you described what I assume to be such a
supposedly stable "non-goal-stack motivational system" as follows:

" consider the motivational system of the
best kind of AGI:  it is motivated by a balanced set of desires that
include the desire to explore and learn, and empathy for the human
species.  By definition, I would think, this simple cluster of desires
and empathic motivations *are* the things that "give it pleasure".

and

"I think that in general, making the AGI as similar to us as possible
(but without the aggressive and dangerous motivations that we are
victims of) would be a good idea simply because we want them to start
out with a strong empathy for us, and we want them to stay that way."

I think this type of motivational system makes a lot of sense, but for all
the reasons stated in my Fri 11/2/2007 2:07 PM post (arguments you have
not responded to), as well as many other reasons, it does not appear at all
certain that such a motivational system would reliably remain stable and
"synchronized with the goals of the human race until the end of history,"
as you claim.

For example, humans might alter such a motivational system accidentally, or
deliberately for short-sighted personal gain (such as when using AGIs in
weapon systems).  Or, over time, the inherent biases designed to give AGIs
empathy for humans might cause them to have more empathy for some humans
than for others, or might lead them to make decisions they believe are in
our best interest but that are not.  Or perhaps AGI robots would come to
embody the "human features" they have been taught empathy for better than
people themselves do.  Etc.

The world is too complicated and is going to change too rapidly in the
next one hundred, one thousand, or ten thousand years for any goal system
designed circa 2015 to remain appropriate until the end of history -
unless history ends pretty soon.

If I am wrong, I would appreciate the enlightenment and increased hope that
would come from being shown how.

Ed Porter

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 03, 2007 1:17 PM
To: [email protected]
Subject: Re: [agi] Can humans keep superintelligences under control -- can
superintelligence-augmented humans compete


Jiri Jelinek wrote:
>> People will want to enjoy life:  yes.  And they should, of course.
>> But so, of course, will the AGIs.
>
> Giving AGI the ability to enjoy = potentially asking for serious
> trouble. Why shouldn't AGI just work for us like other tools we
> currently have (no joy involved)?

Isn't there a fundamental contradiction in the idea of something that
can be a "tool" and also be "intelligent"?  What I mean is, is the word
"tool" usable in this context?

To put it the other way around, consider the motivational system of the
best kind of AGI:  it is motivated by a balanced set of desires that
include the desire to explore and learn, and empathy for the human
species.  By definition, I would think, this simple cluster of desires
and empathic motivations *are* the things that "give it pleasure".

But the thing is, you can change your mind to go and get pleasure in a
different way sometimes.  For example, you could decide to transfer your
mind into the cognitive system of an artificial tiger for a week, and
during that time you would get pleasure from stalking and jumping onto
predator animals, or basking in the sun, or meeting lady tigers.  After
automatically being yanked back into human mental form again at the end
of the holiday, would you say that "you" get pleasure from hunting
predators, etc?  Do you get pleasure from the idea of [exploring
different sensoria]?  I think the latter would be true, and in the same
way an AGI, being quite close to us in design, could get pleasure from
[exploring different sensoria] without it changing the goals or
motivations of the AGI when it was being its native self.

I think that in general, making the AGI as similar to us as possible
(but without the aggressive and dangerous motivations that we are
victims of) would be a good idea simply because we want them to start
out with a strong empathy for us, and we want them to stay that way.

Does this make sense?

I agree that this is a complicated area, little explored before now.



Richard Loosemore
