Well, the short gist of this guy's spiel is that Lenat is on the right track.  The key 
is to accumulate terabytes of simple, temporally forward associations between elements.

A little background check reveals that this guy isn't a complete nutcase.  He has 
some publications (though not many) and a real lab position.

However, his claims are a bit too grandiose, and he comes off a bit like a snake-oil 
salesman at the end when he's fielding questions, especially the one about his 
theory's inability to handle the tightly regimented sequences of commands 
necessary to execute motor programs.  He sidesteps that one in a particularly 
obfuscatory fashion.

Nor is his model very interesting in its applications.

Neuroscience data stands counter to his basic claim that the cortex is just a big 
sheet of associators; there are many genetically specified connection patterns.

His claim that we set up relatively immutable patterns early in life has only been 
shown to be true for the visual cortex, as far as I know.

AI isn't a failure because everyone involved is an idiot who keeps missing the obvious 
point that this genius has stumbled upon.

AI is a failure because AI is hard.  

I give it a C-.

It's long on words and full of idealistic grandeur, but short on substance when you 
really boil it down.


-Brad
