Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread James Ratcliff
The problem is only half addressed by the enormous size of the corpus. A huge corpus will only give you plain predictive ability; reasoning and other abilities are not greatly improved. So a simple interaction:
Person: My name is Bosefus
AI: ok
Person: What is my name?
AI: dunno
Now it is true that
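The failure mode in the dialogue above can be sketched with a toy bigram predictor (hypothetical code, not from the thread): it answers purely from corpus statistics and has no session memory, so being told a name mid-conversation changes nothing about its predictions.

```python
from collections import Counter, defaultdict

class Bigram:
    """Predicts the most frequent next word seen in training -- no dialogue state."""

    def __init__(self):
        self.next_counts = defaultdict(Counter)

    def train(self, text):
        words = text.split()
        for a, b in zip(words, words[1:]):
            self.next_counts[a][b] += 1

    def predict(self, word):
        counts = self.next_counts[word]
        return counts.most_common(1)[0][0] if counts else None

model = Bigram()
# Tiny stand-in corpus (assumed for illustration):
model.train("my name is alice . my name is bob . my name is alice .")

# Even after the user says "My name is Bosefus", completing "my name is ..."
# still yields the corpus-frequent answer, never the session-specific one:
print(model.predict("is"))  # -> "alice", not "Bosefus"
```

The point is structural: nothing in the frequency table binds the current speaker to a name, which is the gap between prediction and the reasoning the poster is asking for.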

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread James Ratcliff
I don't believe "yellow" or "hot" can be learned verbally either. James
Russell Wallace [EMAIL PROTECTED] wrote: On 8/13/06, Matt Mahoney [EMAIL PROTECTED] wrote: There is no knowledge that you can demonstrate verbally that cannot also be learned verbally. An unusual claim... do you mean all

Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread Matt Mahoney
If dumb models kill smart ones in text compression, then how do you know they are dumb? What is your objective test of "smart"? The fact is that in speech recognition research, language models with lower perplexity also have lower word error rates. We have "smart" statistical parsers that are 60%
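The perplexity/compression link Mahoney is leaning on can be made concrete (a minimal sketch, using the standard definitions rather than anything from the thread): cross-entropy is the average negative log2 probability the model assigns to the actual next words, perplexity is 2 to that power, and an entropy coder needs about that many bits per word, so lower perplexity means better compression.

```python
import math

def cross_entropy(probs):
    """Average -log2 p over the model's probabilities for the true next words."""
    return -sum(math.log2(p) for p in probs) / len(probs)

def perplexity(probs):
    return 2 ** cross_entropy(probs)

sharp = [0.5, 0.5, 0.5, 0.5]  # a model that predicts each word with p=0.5
flat  = [0.1, 0.1, 0.1, 0.1]  # a model that barely narrows down the choices

print(perplexity(sharp))  # -> 2.0   (1 bit/word if arithmetic-coded)
print(perplexity(flat))   # -> ~10.0 (~3.32 bits/word)
```

So "compresses better" and "lower perplexity" are the same objective test, which is exactly why it is hard to call the winning model "dumb" on this metric.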

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread Mark Waser
Yellow is the state of reflecting light whose frequency lies between two specific values. Hot is the state of having a temperature above some set value. It takes examples to recognize/understand when your sensory apparatus is reporting one of these states, but this is a calibration issue, not a
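Waser's definitional claim can be written down directly (a hedged sketch; the band edges below use the common ~570-590 nm convention for yellow, and picking those exact cutoffs is precisely the "calibration" he refers to):

```python
def is_yellow(wavelength_nm, lo=570.0, hi=590.0):
    """Label light 'yellow' if its wavelength falls in a fixed band.

    The lo/hi cutoffs are assumed conventions, not facts from the thread --
    calibrating them is the part that takes examples.
    """
    return lo <= wavelength_nm <= hi

print(is_yellow(580))  # -> True
print(is_yellow(650))  # -> False (red region)
```

The verbal definition fits in one line; the dispute in the thread is whether attaching that line to one's own sensory reports can also be done verbally.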

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread James Ratcliff
Now try that on my daughter or any other 3.5-year-old. It doesn't work. :} No, as another poster put it better, there are many things that can't be explained verbally and need to be "experienced", which is one of my big hang-ups about AGI in general. There is very little way to have an AI experience

Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread Mark Waser
If dumb models kill smart ones in text compression, then how do you know they are dumb? They are dumb because they are inflexible and always use the same very simple rules. Fortunately, those "dumb" rules are good enough. What is your objective test of "smart"? My definition of smart

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread Mark Waser
Now try that on my daughter or any other 3.5-year-old. It doesn't work. :} Try what? Your daughter has calibrated her vision and stuck labels on the gauge. What has she learned? That this range reported by *her* personal vision system is labeled yellow. Now, you want to do this without any