--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Suppose that the collective memories of all the humans make up only one
> > billionth of your total memory, like one second of memory out of your
> human
> > lifetime.  Would it make much difference if it was erased to make room for
> > something more important?
> 
> This question is not coherent, as far as I can see.  "My" total memory? 
>   Important to whom?  Under what assumptions do you suggest this situation.

I mean the uploaded you with the computing power of 10^19 brains (to pick a
number).  When you upload, there are two of you: the original human and the
copy.  Both are "you" in the sense that both behave as though conscious and
both have your (original) memories.  I use the term "you" for the upload in
this sense, although it is really everybody.

By "conscious behavior", I mean belief that sensory input is the result of a
real environment and belief in having some control over it.  This is different
than the common meaning of consciousness which we normally associate with
human form or human behavior.  By "believe" I mean claiming that something is
true, and behaving in a way that would increase reward if it is true.  I don't
claim that consciousness exists.

My assumption is friendly AI under the CEV (coherent extrapolated volition)
model.  Currently, FAI is unsolved.  CEV only defines the problem of
friendliness, not a solution.  As I understand it, CEV defines an AI as
friendly if on average it gives humans what they want in the long run, i.e. it
denies requests that it predicts we would later regret.  If the AI has
superhuman intelligence, then it could model human brains and make such
predictions more accurately than we could ourselves.  The unsolved step is to
actually motivate the AI to grant us what it knows we would want.  The problem
is analogous to human treatment of pets.  We know what is best for them (e.g.
vaccinations they don't want), but the animals have no way to motivate us to
provide it.

FAI under CEV would not be applicable to uploaded humans with collective
memories because the AI could not predict what an equal or greater
intelligence would want.  For the same reason, it may not apply to augmented
human brains, i.e. brains extended with additional memory and processing
power.

My question to you, the upload with the computing power of 10^19 brains, is
whether the collective memory of the 10^10 humans alive at the time of the
singularity is important.  Suppose that this memory (say 10^25 bits out of
10^34 available bits) could be lossily compressed into a program that
simulated the rise of human civilization on an Earth similar to ours, but with
different people.  This compression would make space available to run many
such simulations.
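
For concreteness, here is a back-of-envelope check of those numbers.  The
figure of roughly 10^15 bits of long-term memory per human brain is my assumed
ballpark, not something established in this thread.  With it, 10^10 humans
hold about 10^25 bits, and an upload with the power of 10^19 brains has about
10^34 bits, so the collective memory is roughly one part in 10^9 of the total,
the "one billionth" in the quoted message above.  A minimal sketch in Python:

# Back-of-envelope check, assuming ~10^15 bits of memory per human brain
# (an assumed ballpark, not a figure from this thread).
BITS_PER_BRAIN = 1e15   # assumed storage per human brain
HUMANS = 1e10           # population at the time of the singularity
UPLOAD_BRAINS = 1e19    # computing power of the upload, in human-brain units

collective_memory = HUMANS * BITS_PER_BRAIN      # ~1e25 bits
total_capacity = UPLOAD_BRAINS * BITS_PER_BRAIN  # ~1e34 bits

print("collective memory: %.0e bits" % collective_memory)
print("total capacity:    %.0e bits" % total_capacity)
print("fraction of total: %.0e" % (collective_memory / total_capacity))  # ~1e-09, one billionth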

So when I ask you (the upload with 10^19 brains) which decision you would
make, I realize you (the original) are trying to guess the motivations of an
AI that knows 10^19 times more.  We need some additional assumptions:

1. You (the upload) are a friendly AI as defined by CEV.
2. All humans have been uploaded, because as an FAI you predicted that humans
would want their memories preserved, and no harm is done to the original
humans in the process.
3. You want to be smarter (i.e. more processing speed, memory, I/O bandwidth,
and knowledge), because this goal is stable under recursive self-improvement
(RSI).
4. You cannot reprogram your own goals, because systems that could are not
viable.
5. It is possible to simulate intermediate-level agents with the memories of
one or more uploaded humans, but less powerful than yourself.  FAI applies to
these agents.
6. You are free to reprogram the goals and memories of humans (uploaded or
not) and agents less powerful than yourself, consistent with what you predict
they would want in the future.


-- Matt Mahoney, [EMAIL PROTECTED]
