RE: Language modeling (was Re: [agi] draft for comment)

2008-09-08 Thread John G. Rose

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread John G. Rose
From: Matt Mahoney --- On Sat, 9/6/08, John G. Rose wrote: Compression in itself has the overriding goal of reducing storage bits. Not the way I use it. The goal is to predict what the environment will do next. Lossless compression is a way …

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread Matt Mahoney

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread John G. Rose
… Intelligence is multi. John -Original Message- From: Matt Mahoney, Sent: Friday, September 05, 2008 6:39 PM --- On Fri, 9/5/08, Pei Wang wrote: …

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang wrote: Thanks for taking the time to explain your ideas in detail. As I said, our different opinions on how to do AI come from our very different understandings of intelligence. I don't take passing the Turing Test as my research goal (as explained …

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Sat, 9/6/08, John G. Rose wrote: Compression in itself has the overriding goal of reducing storage bits. Not the way I use it. The goal is to predict what the environment will do next. Lossless compression is a way of measuring how well we are doing. -- Matt Mahoney
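
One way to read "lossless compression is a way of measuring how well we are doing": a model that assigns probability p to the symbol that actually occurs next can, via arithmetic coding, encode it in about -log2(p) bits, so the total compressed size is the model's cumulative prediction loss. The Python sketch below uses a hypothetical order-0 adaptive byte model purely for illustration; it is not Mahoney's compressor.

    import math
    from collections import defaultdict

    def code_length_bits(data: bytes) -> float:
        # Order-0 adaptive byte model with add-one smoothing: every byte value
        # starts with one pseudo-count, so predicted probabilities are never zero.
        counts = defaultdict(lambda: 1)
        total = 256
        bits = 0.0
        for b in data:
            p = counts[b] / total     # model's prediction for the byte that occurs
            bits += -math.log2(p)     # ideal arithmetic-coding cost of that byte
            counts[b] += 1            # update the model after seeing it
            total += 1
        return bits

    # The better the model predicts the next byte, the fewer bits it needs.
    print(code_length_bits(b"abababababababababab"))   # repetitive pattern: fewer bits
    print(code_length_bits(b"q7#Lz@9v!Rk2^mX5&wP0"))   # 20 distinct bytes: more bits

On this view, comparing compressed sizes is just comparing how much each model was surprised by the data, which is why prediction quality and lossless compression can be treated as the same measurement.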

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Pei Wang
I won't argue against your preference test here, since this is a big topic, and I've already made my position clear in the papers I mentioned. As for compression, yes, every intelligent system needs to 'compress' its experience, in the sense of keeping the essence but using less space. However, it …

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Sat, 9/6/08, Pei Wang wrote: As for compression, yes, every intelligent system needs to 'compress' its experience, in the sense of keeping the essence but using less space. However, it is clearly not lossless. It is not even what we usually call lossy compression, …
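
The distinction Wang draws can be made concrete with a small Python contrast (the bag-of-words "essence" below is an illustrative stand-in, not a mechanism proposed in the thread): a lossless codec can reproduce its input exactly, while an essence-keeping reduction cannot.

    import zlib
    from collections import Counter

    original = b"the cat sat on the mat because the mat was warm"

    # Lossless: the exact input is recoverable from the compressed form.
    packed = zlib.compress(original)
    assert zlib.decompress(packed) == original

    # "Keeping the essence": word counts exploit the data's regularities,
    # but the original wording and order cannot be reconstructed from them.
    essence = Counter(original.split())
    print(essence.most_common(3))   # e.g. [(b'the', 3), (b'mat', 2), ...]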

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
On Fri, Sep 5, 2008 at 11:15 AM, Matt Mahoney wrote: --- On Thu, 9/4/08, Pei Wang wrote: I guess you still see NARS as using model-theoretic semantics, so you call it symbolic and contrast it with systems that have sensors. This is not correct --- see …

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang wrote: NARS indeed can learn semantics before syntax --- see http://nars.wang.googlepages.com/wang.roadmap.pdf Yes, I see this corrects many of the problems with Cyc and with traditional language models. I didn't see a description of a mechanism …

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
On Fri, Sep 5, 2008 at 6:15 PM, Matt Mahoney wrote: --- On Fri, 9/5/08, Pei Wang wrote: NARS indeed can learn semantics before syntax --- see http://nars.wang.googlepages.com/wang.roadmap.pdf Yes, I see this corrects many of the problems with Cyc and with …

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang wrote: As with many existing AI works, my disagreement with you is not so much about the solution you proposed (I can see the value), but about the problem you specified as the goal of AI. For example, I have no doubt about the theoretical and …

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
Matt, Thanks for taking the time to explain your ideas in detail. As I said, our different opinions on how to do AI come from our very different understandings of intelligence. I don't take passing the Turing Test as my research goal (as explained in …