--- On Tue, 10/14/08, Charles Hixson <[EMAIL PROTECTED]> wrote:

> It seems clear that without external inputs the amount of improvement
> possible is stringently limited.  That is evident from inspection.  But
> why the "without input"?  The only evident reason is to ensure the truth
> of the proposition, as it doesn't match any intended real-world scenario
> that I can imagine.  (I've never considered the "Oracle AI" scenario [an
> AI kept within a black box that will answer all your questions without
> inputs] to be plausible.)

If input is allowed, then we can't clearly distinguish between self improvement 
and learning. Clearly, learning is a legitimate form of improvement, but it is 
not *self* improvement.

What I am trying to debunk is the perceived risk of a fast takeoff singularity 
launched by the first AI to achieve superhuman intelligence. In this scenario, 
a scientist with an IQ of 180 produces an artificial scientist with an IQ of 
200, which produces an artificial scientist with an IQ of 250, and so on. I 
argue it can't happen because human level intelligence is the wrong threshold. 
There is already a global brain (the world economy) with an IQ of roughly 
10^10, and it is approaching 10^12. THAT is the threshold we must cross, and 
that seed was planted 3 billion years ago.

To argue this point, I need to discredit certain alternative proposals, such as 
an intelligent agent making random variations of itself and then testing the 
children with puzzles of the parent's choosing. My paper proves that proposals 
of this form cannot work.
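
For concreteness, here is a minimal Python sketch of the kind of proposal I 
mean. The Agent class, its numeric "genome", and the scoring rule are all 
invented for illustration, not taken from the paper; the point is only the 
shape of the loop, in which the parent supplies both the random variation and 
the test.

    import random

    class Agent:
        """Hypothetical agent: its "program" is a list of numbers, and its
        score on a puzzle is how closely those numbers match the puzzle."""

        def __init__(self, genome):
            self.genome = genome

        def solve(self, puzzle):
            # Stand-in for problem solving: higher is better.
            return -sum((g - p) ** 2 for g, p in zip(self.genome, puzzle))

        def make_puzzle(self):
            # The parent chooses the test, so it can only reward abilities
            # it already knows how to pose and score.
            return [g + random.gauss(0, 0.1) for g in self.genome]

        def mutate(self):
            # Random variation of itself.
            return Agent([g + random.gauss(0, 0.5) for g in self.genome])

    def rsi_loop(generations=20):
        parent = Agent([random.uniform(-1, 1) for _ in range(8)])
        for _ in range(generations):
            puzzle = parent.make_puzzle()   # puzzles of the parent's choosing
            child = parent.mutate()         # random variation of itself
            if child.solve(puzzle) > parent.solve(puzzle):
                parent = child              # "improvement" as judged by the parent
        return parent

    if __name__ == "__main__":
        rsi_loop()

Whatever the details, the child is only ever graded on puzzles the parent 
could already pose and score; that is the sense in which these are proposals 
"of this form".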

-- Matt Mahoney, [EMAIL PROTECTED]


