On 23 Oct 2006 at 9:39, Josh Treadwell wrote:
> This is a big problem. If China were a free nation, I wouldn't have any 
> qualms with it, but the first thing China will do with AGI is marginalize 
> human rights. Any nation that censors its internet (violators are sent to 
> prison/slave camps) and sells the organs of unwilling executed prisoners 
> (more people are executed each year in China than in the rest of the 
> world combined) is not a place where I'd like AGI to be developed. I hope 
> Hugo doesn't regret his decision.

Last time I checked, Hugo de Garis was all for hard takeoff of arbitrary
AGIs as soon as possible, and damn the consequences. This is
someone who gleefully predicts massively destructive wars between
'terrans' and 'cosmists', and expects humanity to be made extinct by
'artilects', and actually wants to /hasten the arrival of this/. While I'd
have to characterise this goal system as quite literally insane, the
decision to accept funding from totalitarian regimes is quite a rational
consequence of it. His architecture (at least as of 'CAM-Brain') is just
about as horribly emergent and uncontrollable/unpredictable as it is
possible to get. If you accept hard takeoff, and you're using an
architecture like that, then it doesn't make a jot of difference what petty
political goals your funders might have; they're as irrelevant as everyone
else's goals once the hard takeoff kicks in. Fortunately there's no
short-term prospect of anything like that actually working, but given
enough zettaflops of nanotech-supplied compute power it might start to be a
serious problem. I'm guessing that his backers are looking for PR and/or
limited commercial spinoffs though.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com


-----
This list is sponsored by AGIRI: http://www.agiri.org/email