--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > My assumption is friendly AI under the CEV model.  Currently, FAI is
> > unsolved.  CEV only defines the problem of friendliness, not a solution.
> > As I understand it, CEV defines AI as friendly if on average it gives
> > humans what they want in the long run, i.e. denies requests that it
> > predicts we would later regret.  If AI has superhuman intelligence, then
> > it could model human brains and make such predictions more accurately
> > than we could ourselves.  The unsolved step is to actually motivate the
> > AI to grant us what it knows we would want.  The problem is analogous to
> > human treatment of pets.  We know what is best for them (e.g. vaccinations
> > they don't want), but it is not possible for animals to motivate us to
> > give it to them.
> 
> This paragraph assumes that humans and AGIs will be completely separate,
> which I have already explained is an extremely unlikely scenario.

I believe you said that humans would have a choice.

I have already mentioned the possibility of brain augmentation, and of uploads
with or without shared memory.  CEV requires that the AGI be smarter than
human; otherwise it could not model the brain to predict what the human
would want in the future.  CEV therefore applies only to those lower- and
middle-level entities.  I use CEV because it seems to be the best definition of
friendliness that we have.
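
To make the decision rule concrete, here is a toy sketch in Python (my own
illustration, not anything taken from the CEV paper); predict_regret() is a
hypothetical placeholder standing in for the superhuman brain modeling that
CEV presupposes:

# Toy sketch of the CEV-style decision rule: grant a request only if the
# person's predicted future self would not regret it.  predict_regret() is
# a hypothetical placeholder; no such model exists today.

def predict_regret(person, request, horizon_years):
    # A real implementation would need a model of the person's brain good
    # enough to simulate their judgment horizon_years from now.
    raise NotImplementedError("requires superhuman brain modeling")

def cev_grant(person, request, horizon_years=20):
    return not predict_regret(person, request, horizon_years)

The unsolved part is not the rule itself but building predict_regret() and
then motivating the AGI to act on its output, as discussed below.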

I already mentioned one other problem with CEV: we have not solved the problem
of actually motivating the AGI to grant us what it knows we will want, and of
keeping that motivation stable through RSI (recursive self-improvement).  You
believe there is a solution (diffuse constraints).

The other problem is that human motivations can be reprogrammed, either by
moving neurons around or by uploading and changing the software.  CEV neglects
this issue.  Suppose the AGI programs you to want to die, and then kills you
because that is now what you want.  That is not far-fetched.  Consider the
opposite scenario where you are feeling suicidal and the AGI reprograms you to
want to live.  Afterwards you would thank it for saving your life, so its
actions are consistent with CEV even if you initially opposed reprogramming. 
Most people would also consider such forced intervention to be ethical.  But
CEV warns against programming any moral or ethical rules into the AGI, because
these rules can change.  At one time, slavery and the persecution of
homosexuals were acceptable.  So you must either allow or forbid the AGI to
reprogram your motivations.  Which will it be?

But let us return to the original question for the case where humans are
uploaded with shared memory and augmented into a single godlike intelligence,
now dropping the assumption of CEV.  The question remains whether this AGI
would preserve the lives of the original humans or their memories.  Not what
it should do, but what it would do.  We have a few decades left to think about
this.



-- Matt Mahoney, [EMAIL PROTECTED]
