--- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>Hmmm.... I am pretty strongly skeptical of intelligence tests that do not 
>measure the actual functionality of an AI system, but rather measure the 
>theoretical capability of the structures or processes or data inside the 
>system...
>
>The only useful way I know how to define intelligence is **functionally**, in 
>terms of what a system can actually do ... 
>
>A 2 year old cannot get itself to pay attention to predicting language for 
>more than a few minutes, so in a functional sense, it is a much stupider 
>language predictor than gzip ... 

Intelligence is not a point on a line. A calculator could be more intelligent 
than any human, depending on what you want it to do.

Text compression measures the capability of a language model, which is an 
important, unsolved problem in AI. (Vision is another.)
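To make the idea concrete, here is a minimal sketch (not from the original email) of using a general-purpose compressor as a crude stand-in for a language model: a better model of the text's regularities compresses it into fewer bits per character, which is exactly what a compression benchmark measures.

```python
# Crude language-modeling benchmark via compression: fewer bits per
# character means the compressor "understands" the text's regularities
# better. zlib is a stand-in here for any model-backed compressor.
import zlib

def bits_per_character(text: str) -> float:
    """Compressed size in bits divided by the number of characters."""
    data = text.encode("utf-8")
    compressed = zlib.compress(data, level=9)
    return 8 * len(compressed) / len(text)

# Highly redundant English-like text compresses to far fewer bits/char
# than the ~4.7 bits/char entropy of uniformly random lowercase letters.
sample = "the quick brown fox jumps over the lazy dog " * 100
print(f"{bits_per_character(sample):.2f} bits/char")
```

The same measurement applies unchanged to any predictor: plug in a stronger model (or compressor) and the bits-per-character score drops, which is why compression ratio works as a progress metric for language modeling.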

I'm not building AGI. (That is a $1 quadrillion problem.) I'm studying 
algorithms for learning language. Text compression is a useful tool for 
measuring progress (although not for vision).

-- Matt Mahoney, [EMAIL PROTECTED]


