I concur here. It was also claimed earlier that an AGI couldn't be understood because humans can't understand the brain.
So when we become able to understand the brain, will this view be reversed?  Or is the thought that we will NEVER be able to understand the brain?  Because while I believe the brain to be a truly complex thing, I don't hold that other belief.
  And at many points in time, complete understanding is not necessary, but partial understanding is, and being able to learn about sections of something is important.  I don't understand all of the complexities of my toilet's plumbing or know exactly where the pipes go.  But if I need to, I can go find out, I can unclog it, and if I had to replace pipes I could start digging for the problem.

James

Mark Waser <[EMAIL PROTECTED]> wrote:
>> Models that are simple enough to debug are too simple to scale. 
>> The contents of a knowledge base for AGI will be beyond our ability to comprehend.
 
    Given sufficient time, anything should be understandable and debuggable.  Size alone does not make something incomprehensible, and I defy you to point at *anything* that is truly incomprehensible to a smart human (for any reason other than that we lack knowledge of it).  I've seen all the analogies with pets not understanding, and the beliefs that AIs are going to have minds "immeasurably greater than our own", and I submit that it's all just speculation on your part.  My contention is that there is a threshold, that we are above it, and that beyond it, it's just a matter of speed and how much you can hold in working memory at a time.  I certainly don't buy the "mystical" approach that says sufficiently large neural nets will come up with discoveries so complex that we can't understand them.  I contend that if you can't explain it to a very smart human (given sufficient time), then you don't understand it.
 
    Give me *one* counter-example to the above . . . .
 
----- Original Message -----
Sent: Monday, November 13, 2006 10:22 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis

James Ratcliff <[EMAIL PROTECTED]> wrote:
>Well, words and language-based ideas/terms adequately describe much of the upper levels of human interaction and seem
>appropriate in that case.
>
>It fails, of course, when it devolves down to the physical level, i.e. vision or motor-cortex skills, but other than that, using
>language internally would seem natural, and it would be much easier to look inside the box, see what is going on, and correct the
>system's behaviour.

No, no, no, that is why AI failed.  You can't look inside the box because it's 10^9 bits.  Models that are simple enough to debug are too simple to scale.  How many times will we repeat this mistake?  The contents of a knowledge base for AGI will be beyond our ability to comprehend.  Get over it.  It will require a different approach.

1. Develop a quantifiable criterion for success, a test score.
2. Develop a theory of learning.
3. Develop a training and test set (about 10^9 bits compressed).
4. Tune the learning model to improve the score.
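Very loosely, the four steps amount to a model-selection loop: fix a score, train candidate models, keep the one that scores best. A minimal sketch (all names and the toy data are illustrative, not from this post):

```python
# Hypothetical sketch of the four-step recipe: a quantifiable score
# plus a loop that tunes the learning model to maximize it.

def tune(variants, train, test, score):
    """Return the candidate whose trained model scores best on `test`."""
    return max(variants, key=lambda fit: score(fit(train), test))

# Toy instance: "models" are shrinkage factors for a mean estimator.
def make_fit(alpha):
    def fit(train):
        # Step 2/3: "learn" by shrinking the sample mean by `alpha`.
        return (sum(train) / len(train)) * alpha
    return fit

def score(model, test):
    # Step 1: a quantifiable criterion (negative squared error here).
    return -(model - sum(test) / len(test)) ** 2

train, test = [1.0, 2.0, 3.0], [2.1, 1.9, 2.0]
# Step 4: tune over candidates and keep the best scorer.
best_fit = tune([make_fit(a) for a in (0.5, 0.9, 1.0)], train, test, score)
```

The point is only that the loop never needs to inspect the model's internals; it needs only the score.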

Example:

1. Criteria: SAT analogy test score.
2. Theory: word association matrix reduced by singular value decomposition (SVD).
3. Data: 50M word corpus of news articles.
4. Results: http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-48255.pdf

An SVD-factored word association matrix seems pretty opaque to me.  You can't point to the matrix elements that represent associations like cat-dog or moon-star, nor can you insert such knowledge for testing.  If you want to understand it, you have to look at the learning algorithm.  It turns out that there is an efficient neural model for SVD.  http://gen.gorrellville.com/gorrell06.pdf
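To make the opacity concrete, here is a toy sketch (hypothetical counts and names, not the method from the cited paper): factor a tiny word-context count matrix with SVD. Related words end up with similar latent vectors, yet no single element of the factors is labeled "cat-dog"; the association is distributed across dense, unlabeled dimensions.

```python
import numpy as np

words = ["cat", "dog", "moon", "star"]
# Hypothetical co-occurrence counts; columns are context words
# ("pet", "fur", "night", "sky"). Rows are indexed by `words`.
M = np.array([
    [9, 7, 0, 1],   # cat
    [8, 9, 1, 0],   # dog
    [0, 1, 9, 8],   # moon
    [1, 0, 7, 9],   # star
], dtype=float)

# Truncated SVD: keep the top k=2 latent dimensions.
U, s, Vt = np.linalg.svd(M)
k = 2
embed = U[:, :k] * s[:k]  # each row is a word's dense latent vector

def similarity(a, b):
    """Cosine similarity between two words' latent vectors."""
    va, vb = embed[words.index(a)], embed[words.index(b)]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# cat-dog comes out much more similar than cat-star, but nothing in
# U, s, or Vt is individually inspectable as a "cat-dog" fact.
```

Debugging such a model means probing it with similarity queries and examining the learning algorithm, not reading the knowledge base directly.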

It should not take decades to develop a knowledge base like Cyc.  Statistical approaches can do this in a matter of minutes or hours.
 
-- Matt Mahoney, [EMAIL PROTECTED]


This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=303




_______________________________________
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! http://www.falazar.com/projects/Torrents/tvtorrents_show.php

