This discussion of how to guarantee that AI will be friendly seems to presume that a 
super AI and human brains will be separate entities.  I believe this will not 
be the case, and the issue will be moot.

Humans would like to be smarter: to be able to think faster, learn faster, 
communicate faster, and remember more.  We would also like to end disease, live 
longer, and otherwise overcome various limitations of the human body.  I think 
this can be achieved by copying the information in our brains into more powerful 
computers.  I think people will start doing this once we develop the technology 
to do so.  Once your memories are uploaded, there will no longer be any need 
for your physical body.  In fact, it may simplify the technology to scan all 
the neurons and synapses in your brain if you don't need to be kept alive in 
the process.

Shane Legg makes some powerful, and I think logical, arguments that we cannot, 
and should not, try to control an entity that is smarter than us.  First, it 
will be smart enough to evade our attempts to control it.  Second, our 
decisions are bound to be worse than its.  Such an entity may decide that the 
extinction of the human race is in our best interest.  If we are not prepared 
for that possibility, then we should not build it.  But it would be our loss: 
when you die, your memories will be destroyed.
 
-- Matt Mahoney, [EMAIL PROTECTED]


