--- On Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Your paper does **not** prove anything whatsoever about real-world
> situations.

You are correct. My RSI paper applies only to self-improvement of closed 
systems. In the interest of proving the safety of AI, I think this is a good 
thing. It proves that various scenarios in which an AI rewrites its source 
code, or makes random changes and tests them, will not work without external 
input, even with unlimited computing power. This removes one possible path to 
a fast-takeoff singularity.

Also, you are right that it does not apply to many real-world problems. Here my 
objection (as stated in my AGI proposal, though perhaps not clearly) is that 
creating an artificial scientist with slightly above-human intelligence won't 
launch a singularity either, but for a different reason: it is not the 
scientist that creates a smarter scientist, but the whole global economy. 
George Will expresses the idea better than I do in 
http://www.newsweek.com/id/158752 -- nobody can make a pencil, much less an AI.

The global brain *is* self-improving, both by learning and by reorganizing 
itself to be more efficient. Without input, the self-organization would reach a 
maximum and stop. Growth requires input as well as increased computing power, 
gained by adding people and computers.

As for using algorithmic complexity as a proxy for intelligence (an upper 
bound, actually), perhaps you can suggest an alternative. Algorithmic 
complexity measures "how much we know". Less well-defined measures seem to 
break down into philosophical arguments over exactly what intelligence is.

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/