Mark Waser writes:
 
> P.S.  You missed the time where Eliezer said at Ben's 
> AGI conference that he would sneak out the door before 
> warning others that the room was on fire    :-)
 
You people making public progress toward AGI are very brave indeed!  I wonder 
if a time will come when the personal security of AGI researchers, or the 
security of AGI conferences, becomes a real concern.  Stopping AGI could be a 
high priority for existential-risk wingnuts.
 
On a slightly related note, I notice that many (most?) AGI approaches do not 
include facilities for "recursive self-improvement" in the sense of giving the 
AGI access to its own source code and algorithms.  I wonder if that approach 
is inherently safer, since the path to explosive self-improvement becomes much 
more difficult and unlikely to happen without being noticed.
 
Personally, I think there is little danger that a properly-programmed 
GameBoy will suddenly recursively self-improve itself into a 
singularity-causing AGI, and the odds that any computer will be able to do so 
within at least the next 10 years are only slightly higher.
 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Reply via email to