>> but I'm not very convinced that the singularity *will* automatically happen. 
>> IMHO I think the nature of intelligence implies it is not amenable to 
>> simple linear scaling -- likely not even log-linear

    I share that guess/semi-informed opinion. However, while it makes me 
less afraid of hard-takeoff horribleness, it inflates my fear of someone 
taking a Friendly AI and successfully dismantling and misusing the pieces (if 
not reconstructing a non-Friendly AGI in their own image) -- and then perhaps 
winning a hardware and numbers race.

        Mark

P.S.  You missed the time when Eliezer said at Ben's AGI conference that he 
would sneak out the door before warning others that the room was on fire    :-)
