Here is Vernor Vinge's original essay on the singularity.
http://mindstalk.net/vinge/vinge-sing.html

The premise is that if humans can create agents with above-human intelligence,
then those agents can do the same. What I am questioning is whether agents at
any intelligence level can do this. I don't believe an agent at any level can
recognize intelligence greater than its own, and therefore it cannot test its
creations. We rely on competition in an external environment to make fitness
decisions; the parent isn't intelligent enough to make the correct choice on
its own.
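To ground this, here is a toy sketch (my own made-up numbers, not a model of
real intelligence): a parent that can only pose and check problems up to its
own skill level cannot tell a slightly smarter child from a much smarter one,
but an external environment with harder problems can.

import random

random.seed(0)

class Agent:
    def __init__(self, skill):
        self.skill = skill  # maximum problem difficulty this agent can solve

    def solves(self, difficulty):
        return difficulty <= self.skill

def parent_test(parent, child, n=1000):
    # The parent can only pose and verify problems up to its own skill level.
    problems = [random.uniform(0, parent.skill) for _ in range(n)]
    return sum(child.solves(d) for d in problems) / n

def environment_test(agents, max_difficulty=10.0, n=1000):
    # The environment poses problems of arbitrary difficulty and simply
    # records who survives them -- no judge smarter than the contestants.
    problems = [random.uniform(0, max_difficulty) for _ in range(n)]
    return {a.skill: sum(a.solves(d) for d in problems) / n for a in agents}

parent  = Agent(skill=5.0)
child_a = Agent(skill=6.0)   # slightly above the parent
child_b = Agent(skill=9.0)   # far above the parent

print(parent_test(parent, child_a))   # ~1.0
print(parent_test(parent, child_b))   # ~1.0 -- the parent can't tell them apart
print(environment_test([parent, child_a, child_b]))
                                      # ~0.5, ~0.6, ~0.9 -- the environment can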

-- Matt Mahoney, [EMAIL PROTECTED]



----- Original Message ----
From: Mike Tintner <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

Matt: If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge.

Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone) as possibly occurring, and what
form such a takeoff might take? I hope the discussion of RSI is not entirely
one of airy generalities, without any grounding in reality.


