RE: Language modeling (was Re: [agi] draft for comment)

2008-09-08 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Sun, 9/7/08, John G. Rose [EMAIL PROTECTED] wrote: From: John G. Rose [EMAIL PROTECTED] Subject: RE: Language modeling (was Re: [agi] draft for comment) To: agi@v2.listbox.com Date: Sunday, September 7, 2008, 9:15 AM From: Matt

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote: Compression in itself has the overriding goal of reducing storage bits. Not the way I use it. The goal is to predict what the environment will do next. Lossless compression is a way

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, John G. Rose [EMAIL PROTECTED] wrote: From: John G. Rose [EMAIL PROTECTED] Subject: RE: Language modeling (was Re: [agi] draft for comment) To: agi@v2.listbox.com Date: Sunday, September 7, 2008, 9:15 AM From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Sat, 9/6

Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-07 Thread Steve Richfield
[EMAIL PROTECTED] Subject: Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.)) To: agi@v2.listbox.com Date: Saturday, September 6, 2008, 2:58 PM Matt, I heartily disagree with your view as expressed here, and as stated to me by heads of CS

Re: [agi] draft for comment

2008-09-07 Thread Mike Tintner
Pei: As I said before, you give 'symbol' a very narrow meaning, and insist that it is the only way to use it. In the current discussion, symbols are not 'X', 'Y', 'Z', but 'table', 'time', 'intelligence'. BTW, what images do you associate with the latter two? Since you prefer to use a person as an example,

Re: [agi] draft for comment

2008-09-07 Thread Jiri Jelinek
Mike, If you think your AGI know-how is superior to the know-how of those who already built testable thinking machines, then why don't you try to build one yourself? Maybe you would learn more that way than by spending a significant amount of time trying to sort out the great incompatibilities between

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread John G. Rose
Intelligence is multi. John -Original Message- From: Matt Mahoney [mailto:[EMAIL PROTECTED] Sent: Friday, September 05, 2008 6:39 PM To: agi@v2.listbox.com Subject: Re: Language modeling (was Re: [agi] draft for comment) --- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: Like

Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-06 Thread Steve Richfield
Matt, I heartily disagree with your view as expressed here, and as stated to me by heads of CS departments and other high-ranking CS PhDs, nearly (but not quite) all of whom have lost the fire in the belly that we all once had for CS/AGI. I DO agree that CS is like every other technological

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: Thanks for taking the time to explain your ideas in detail. As I said, our different opinions on how to do AI come from our very different understanding of intelligence. I don't take passing the Turing Test as my research goal (as explained

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote: Compression in itself has the overriding goal of reducing storage bits. Not the way I use it. The goal is to predict what the environment will do next. Lossless compression is a way of measuring how well we are doing. -- Matt Mahoney,
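A minimal sketch of that measurement idea, in Python (my illustration, assuming a simple adaptive order-0 byte model; this is not Mahoney's actual benchmark code). The score is the ideal code length, the sum of -log2 p(byte) over the data, which an arithmetic coder achieves to within a few bits; a model that predicts the next byte better yields fewer bits:

  import math
  from collections import defaultdict

  def code_length_bits(data):
      # Adaptive order-0 model with Laplace smoothing over the 256 byte values.
      # (Illustrative only; real compressors such as PAQ mix far richer models.)
      counts = defaultdict(int)
      seen = 0
      bits = 0.0
      for b in data:
          p = (counts[b] + 1) / (seen + 256)  # model's prediction for this byte
          bits += -math.log2(p)               # ideal lossless code length
          counts[b] += 1                      # update the model as we go
          seen += 1
      return bits

  text = b"the cat sat on the mat. the cat sat on the mat."
  print("%.0f bits vs %d raw bits" % (code_length_bits(text), 8 * len(text)))

A stronger model (higher-order contexts, word- or semantic-level prediction) assigns higher probabilities to what actually comes next, so the same text codes to fewer bits; that is what makes lossless compressed size a usable score for prediction quality.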

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Pei Wang
I won't argue against your preference test here, since this is a big topic, and I've already made my position clear in the papers I mentioned. As for compression, yes every intelligent system needs to 'compress' its experience in the sense of keeping the essence but using less space. However, it

Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-06 Thread Matt Mahoney
Steve Richfield [EMAIL PROTECTED] Subject: Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.)) To: agi@v2.listbox.com Date: Saturday, September 6, 2008, 2:58 PM Matt, I heartily disagree with your view as expressed here, and as stated to me by heads

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Sat, 9/6/08, Pei Wang [EMAIL PROTECTED] wrote: As for compression, yes every intelligent system needs to 'compress' its experience in the sense of keeping the essence but using less space. However, it is clearly not lossless. It is not even what we usually call lossy compression,

Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Thu, 9/4/08, Pei Wang [EMAIL PROTECTED] wrote: I guess you still see NARS as using model-theoretic semantics, so you call it symbolic and contrast it with system with sensors. This is not correct --- see http://nars.wang.googlepages.com/wang.semantics.pdf and

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
On Fri, Sep 5, 2008 at 11:15 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Thu, 9/4/08, Pei Wang [EMAIL PROTECTED] wrote: I guess you still see NARS as using model-theoretic semantics, so you call it symbolic and contrast it with system with sensors. This is not correct --- see

Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.)

2008-09-05 Thread Steve Richfield
Matt, FINALLY, someone here is saying some of the same things that I have been saying. With general agreement with your posting, I will make some comments... On 9/4/08, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Thu, 9/4/08, Valentina Poletti [EMAIL PROTECTED] wrote: Ppl like Ben argue that

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: NARS indeed can learn semantics before syntax --- see http://nars.wang.googlepages.com/wang.roadmap.pdf Yes, I see this corrects many of the problems with Cyc and with traditional language models. I didn't see a description of a mechanism

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
On Fri, Sep 5, 2008 at 6:15 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: NARS indeed can learn semantics before syntax --- see http://nars.wang.googlepages.com/wang.roadmap.pdf Yes, I see this corrects many of the problems with Cyc and with

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: Like with many existing AI works, my disagreement with you is not that much on the solution you proposed (I can see the value), but on the problem you specified as the goal of AI. For example, I have no doubt about the theoretical and

AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Steve Richfield [EMAIL PROTECTED] wrote: I think that a billion or so, divided up into small pieces to fund EVERY disparate approach to see where the low-hanging fruit is, would go a LONG way in guiding subsequent billions. I doubt that it would take a trillion to succeed.

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
Matt, Thanks for taking the time to explain your ideas in detail. As I said, our different opinions on how to do AI come from our very different understanding of intelligence. I don't take passing the Turing Test as my research goal (as explained in

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Hi, What I think is that the set of patterns in perceptual and motoric data has radically different statistical properties than the set of patterns in linguistic and mathematical data ... and that the properties of the set of patterns in perceptual and motoric data are intrinsically

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Also, relatedly and just as critically, the set of perceptions regarding the body and its interactions with the environment, are well-structured to give the mind a sense of its own self. This primitive infantile sense of body-self gives rise to the more sophisticated phenomenal self of the

Re: [agi] draft for comment.. P.S.

2008-09-04 Thread Valentina Poletti
That's if you aim at getting an AGI that is intelligent in the real world. I think some people on this list (incl Ben perhaps) might argue that for now - for safety purposes but also due to costs - it might be better to build an AGI that is intelligent in a simulated environment. Ppl like Ben

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:10 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Sure it is. Systems with different sensory channels will never fully understand each other. I'm not saying that one channel (verbal) can replace another (visual), but that both of them (and many others) can give

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:12 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Also, relatedly and just as critically, the set of perceptions regarding the body and its interactions with the environment, are well-structured to give the mind a sense of its own self. This primitive infantile sense of

Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
I agree with Pei in that a robot's experience is not necessarily more real than that of a, say, web-embedded agent - if anything it is closer to the *human* experience of the world. But who knows how limited our own sensory experience is anyhow. Perhaps a better intelligence would comprehend the

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Obviously you didn't consider the potential a laptop has with its network connection, which in theory can give it all kinds of perception by connecting it to some input/output device. yes, that's true ... I was considering the laptop w/ only a power cable as the AI system in question. Of

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Hi Pei, I think your point is correct that the notion of embodiment presented by Brooks and some other roboticists is naive. I'm not sure whether their actual conceptions are naive, or whether they just aren't presenting their foundational philosophical ideas clearly in their writings (being

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
However, could you guys be more specific regarding the statistical differences of different types of data? What kind of differences are you talking about specifically (mathematically)? And what about the differences at the various levels of the dual-hierarchy? Has any of your work or

Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
On 9/4/08, Ben Goertzel [EMAIL PROTECTED] wrote: However, could you guys be more specific regarding the statistical differences of different types of data? What kind of differences are you talking about specifically (mathematically)? And what about the differences at the various levels of

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
So in short you are saying that the main difference between I/O data from a motor-embodied system (such as a robot or human) and a laptop is the ability to interact with the data: make changes in its environment to systematically change the input? Not quite ... but, to interact w/ the data in a

Re: [agi] draft for comment

2008-09-04 Thread Terren Suydam
Hi Ben, You may have stated this explicitly in the past, but I just want to clarify - you seem to be suggesting that a phenomenological self is important if not critical to the actualization of general intelligence. Is this your belief, and if so, can you provide a brief justification of

Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.)

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Valentina Poletti [EMAIL PROTECTED] wrote: Ppl like Ben argue that the concept/engineering aspect of intelligence is independent of the type of environment. That is, given you understand how to make it in a virtual environment you can then transpose that concept into a real

Re: [agi] draft for comment

2008-09-04 Thread Matt Mahoney
--- On Wed, 9/3/08, Pei Wang [EMAIL PROTECTED] wrote: TITLE: Embodiment: Who does not have a body? AUTHOR: Pei Wang ABSTRACT: In the context of AI, ``embodiment'' should not be interpreted as ``giving the system a body'', but as ``adapting to the system's experience''. Therefore, being

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 8:56 AM, Valentina Poletti [EMAIL PROTECTED] wrote: I agree with Pei in that a robot's experience is not necessarily more real than that of a, say, web-embedded agent - if anything it is closer to the human experience of the world. But who knows how limited our own

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 9:35 AM, Ben Goertzel [EMAIL PROTECTED] wrote: I understand that a keyboard and touchpad do provide proprioceptive input, but I think it's too feeble, and too insensitively responsive to changes in the environment and the relation btw the laptop and the environment, to

Re: [agi] draft for comment

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote: Another aspect of embodiment (as the term is commonly used), is the false appearance of intelligence. We associate intelligence with humans, given that there are no other examples. So giving an AI a face or a robotic body modeled after a human

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 10:04 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Hi Pei, I think your point is correct that the notion of embodiment presented by Brooks and some other roboticists is naive. I'm not sure whether their actual conceptions are naive, or whether they just aren't presenting

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:22 PM, Matt Mahoney [EMAIL PROTECTED] wrote: The paper seems to argue that embodiment applies to any system with inputs and outputs, and therefore all AI systems are embodied. No. It argues that since every system has inputs and outputs, 'embodiment', as a non-trivial

[agi] draft for comment

2008-09-03 Thread Pei Wang
TITLE: Embodiment: Who does not have a body? AUTHOR: Pei Wang ABSTRACT: In the context of AI, ``embodiment'' should not be interpreted as ``giving the system a body'', but as ``adapting to the system's experience''. Therefore, being a robot is neither a sufficient condition nor a necessary

Re: [agi] draft for comment

2008-09-03 Thread Mike Tintner
Pei: it is important to understand that linguistic experience and non-linguistic experience are both special cases of experience, and the latter is not more real than the former. In the previous discussions, many people implicitly suppose that linguistic experience is nothing but

Re: [agi] draft for comment

2008-09-03 Thread Ben Goertzel
Pei, I have a different sort of reason for thinking embodiment is important ... it's a deeper reason that I think underlies the 'embodiment is important because of symbol grounding' argument. Linguistic data, mathematical data, visual data, motoric data etc. are all just bits ... and intelligence

Re: [agi] draft for comment

2008-09-03 Thread Pei Wang
Mike, As I said before, you give 'symbol' a very narrow meaning, and insist that it is the only way to use it. In the current discussion, symbols are not 'X', 'Y', 'Z', but 'table', 'time', 'intelligence'. BTW, what images do you associate with the latter two? Since you prefer to use a person as

Re: [agi] draft for comment.. P.S.

2008-09-03 Thread Mike Tintner
I think I have an appropriate term for what I was trying to conceptualise. It is that intelligence has not only to be embodied, but also to be EMBEDDED in the real world - that's the only way it can test whether information about the world and real objects is really true. If you want to

Re: [agi] draft for comment

2008-09-03 Thread Pei Wang
On Wed, Sep 3, 2008 at 6:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote: What I think is that the set of patterns in perceptual and motoric data has radically different statistical properties than the set of patterns in linguistic and mathematical data ... and that the properties of the set of