Samantha Atkins wrote:

 >I very very much mind.  But would I sacrifice such a vast  
 >intelligence to protect humanity?  That is a highly rhetorical  
 >question I hope to never need to answer in reality.    Whatever my  
 >answer might be it would not be automatic.    If I knew beyond a  
 >shadow of a doubt that only one of A and B could survive going  
 >forward and that A exemplified the most of everything I value by a  
 >very considerable margin and it was my own choice somehow which  
 >survived and I am a member of B, what would I do?   That is a  
 >different question from the original but seems to be what the  
 >question is taken to "really" be.   Which is fascinating.
 >
 >The question in this form is much too rhetorical and unlikely.   It  
 >is a classic "lifeboat problem".    Those are notoriously difficult  
 >to answer without appearing monstrous to someone.   In the original  
 >form of the question,  I will answer that yes, I would consider  
 >destroying a vastly more intelligent and capable being than any human  
 >or even all humans as more heinous than destroying a human being or  
 >even all humans.   Although it is pretty meaningless to compare or  
 >grade such horrors as the destruction of humanity.   Does that make  
 >me monstrous somehow?  Can any answer to a grossly unlikely  
 >hypothetical like this really say anything important about the answerer?

I understand that you value intelligence and capability, but I can't see
my way to the destruction of humanity from there. 

The existence of superintelligence (a fact of the question) suggests
the universe permits the possibility that each of the billions of humans
could become a superintelligence.

There is some unique point in the space of moral calculations where the
potential existence of billions of superintelligences outweighs the
current existence of one. Not knowing where this point lies, I have to
generate my best guess. 

Let's assume, knowing it's possible, that the path from human to
superintelligence is ridiculously hard, almost indistinguishable from
impossible. Maybe each human has a 0.0000001 (one in ten million)
probability of transcension. With 6 billion humans, that is an expected
600 superintelligences that will eventually come to exist. Ceteris
paribus, that's 600 times the intelligence and capability existing
currently.
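
For anyone who wants to check that arithmetic, here is a quick Python
sketch (the one-in-ten-million probability and the 6 billion population
are just the assumptions above, not data):

    # Expected number of humans who eventually transcend, treating
    # each one as an independent one-in-ten-million shot.
    population = 6_000_000_000
    p_transcend = 1e-7      # assumed probability of transcension
    expected_sis = population * p_transcend
    print(expected_sis)     # ~600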

Even if we discount the value of these potential future
superintelligences by 5% per year of delay, it would take roughly 131
years of no human transcension to break even. (The present value of 600
superintelligences 131 years out, discounted at 5%, is still just
greater than 1; a year later it falls below 1.)
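
Again just a sketch of the same numbers, the break-even delay under a
5% annual discount rate:

    import math

    expected_sis = 600
    discount_rate = 0.05
    # Years of delay at which the discounted value of 600 future
    # superintelligences drops to the value of the 1 existing one.
    break_even = math.log(expected_sis) / math.log(1 + discount_rate)
    print(break_even)       # ~131.1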

I think I'm using conservative figures and rates. If not, I'm open to
revising them.     

I can deduce another fact from the question to buttress this line of
reasoning. That I am required and empowered to make this choice in a
universe where at least one superintelligence exists indicates that the
existing superintelligence (the one affected by my choice) is not
capable enough to retain the power to make this choice itself. Though
it's not necessary to the calculation, this is qualitative evidence that
existent superintelligence is *less* important than it seems to me now.
I would have to adjust future values accordingly as well, of course. But
it affects my *present* state of mind. 

Keith        
