Pei: As I said before, you give 'symbol' a very narrow meaning, and insist
that it is the only way to use it. In the current discussion,
symbols are not 'X', 'Y', 'Z', but 'table', 'time', 'intelligence'.
BTW, what images do you associate with the latter two?
Since you prefer to use person as example,
Mike,
If you think your AGI know-how is superior to the know-how of those
who have already built testable thinking machines, then why don't you try to
build one yourself? Maybe you would learn more that way than by
spending a significant amount of time trying to sort out great
incompatibilities between
. Intelligence is multi.
John
From: Steve Richfield [EMAIL PROTECTED]
Subject: Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))
To: agi@v2.listbox.com
Date: Saturday, September 6, 2008, 2:58 PM
Matt,
I heartily disagree with your view as expressed here, and as stated to me by
heads of CS departments and other high-ranking CS PhDs, nearly (but not
quite) all of whom have lost the fire in the belly that we all once had
for CS/AGI.
I DO agree that CS is like every other technological
--- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote:
Compression in itself has the overriding goal of reducing
storage bits.
Not the way I use it. The goal is to predict what the environment will do next.
Lossless compression is a way of measuring how well we are doing.
-- Matt Mahoney,
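A minimal sketch of the equivalence Matt is invoking (illustrative Python, not his actual code): an arithmetic coder driven by a predictive model spends about -log2(p) bits on a symbol the model assigned probability p, so the compressed size equals the model's cumulative prediction loss, and better prediction means smaller files.

import math

class CountModel:
    """Toy adaptive order-0 model: P(symbol) from counts seen so far,
    with add-one (Laplace) smoothing. A stand-in for any predictor."""
    def __init__(self, alphabet):
        self.counts = {s: 1 for s in alphabet}
        self.total = len(alphabet)
    def predict(self, symbol):
        return self.counts[symbol] / self.total
    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1

def compressed_size_bits(model, data):
    """Cumulative log-loss: the bits an arithmetic coder driven by the
    model's predictions would emit, up to O(1) coder overhead."""
    bits = 0.0
    for symbol in data:
        bits += -math.log2(model.predict(symbol))  # cost of this symbol
        model.update(symbol)                       # then learn from it
    return bits

text = "abracadabra" * 20
print(compressed_size_bits(CountModel(set(text)), text))  # well under 8 bits/char

The same measurement works for any model, which is why a lossless-compression benchmark doubles as a prediction benchmark.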
I won't argue against your preference test here, since this is a
big topic, and I've already made my position clear in the papers I
mentioned.
--- On Sat, 9/6/08, Pei Wang [EMAIL PROTECTED] wrote:
As for compression, yes every intelligent system needs to 'compress'
its experience in the sense of keeping the essence but using less
space. However, it is clearly not lossless. It is even not what we
usually call lossy compression,
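One hedged way to make Pei's distinction concrete (my gloss, not his formulation): a system can keep actionable abstractions of its experience rather than anything a decoder could reconstruct, so the result is neither lossless nor lossy compression in the codec sense. A toy Python sketch:

from collections import Counter

def summarize(events):
    """Reduce a stream of (subject, relation, object) experiences to
    relation frequencies. The essence (what tends to co-occur) survives
    in less space, but the raw episodes are unrecoverable - there is no
    decoder, even an approximate one."""
    return Counter(events)

experience = [
    ("table", "supports", "cup"),
    ("cup", "holds", "coffee"),
    ("table", "supports", "cup"),
    ("cup", "holds", "coffee"),
    ("table", "supports", "book"),
]
print(summarize(experience).most_common(2))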
--- On Thu, 9/4/08, Pei Wang [EMAIL PROTECTED] wrote:
I guess you still see NARS as using model-theoretic semantics, so you
call it symbolic and contrast it with a system with sensors. This is
not correct --- see
http://nars.wang.googlepages.com/wang.semantics.pdf and
Matt,
FINALLY, someone here is saying some of the same things that I have been
saying. In general agreement with your posting, I will make some
comments...
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote:
NARS indeed can learn semantics before syntax --- see
http://nars.wang.googlepages.com/wang.roadmap.pdf
Yes, I see this corrects many of the problems with Cyc and with traditional
language models. I didn't see a description of a mechanism
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote:
As with many existing AI works, my disagreement with you is not that
much on the solution you proposed (I can see the value), but on the
problem you specified as the goal of AI. For example, I have no doubt
about the theoretical and
--- On Fri, 9/5/08, Steve Richfield [EMAIL PROTECTED] wrote:
I think that a billion or so, divided up into small pieces to fund EVERY
disparate approach to see where the low hanging fruit is, would go a
LONG way in guiding subsequent billions. I doubt that it would take a
trillion to succeed.
Matt,
Thanks for taking the time to explain your ideas in detail. As I said,
our different opinions on how to do AI come from our very different
understanding of intelligence. I don't take passing the Turing Test as
my research goal (as explained in
On Wed, Sep 3, 2008 at 6:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
What I think is that the set of patterns in perceptual and motoric data has
radically different statistical properties than the set of patterns in
linguistic and mathematical data ... and that the properties of the set of
patterns in perceptual and motoric data are intrinsically
Also, relatedly and just as critically, the set of perceptions regarding
the body and its interactions with the environment are well-structured to
give the mind a sense of its own self. This primitive infantile sense of
body-self gives rise to the more sophisticated phenomenal self of the
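Ben's claim can be probed with even a crude statistic. An illustrative sketch (my construction, not Ben's analysis): the correlation between adjacent samples is close to 1 for perceptual signals, where neighboring pixels or audio samples are strongly dependent, and much weaker for raw character codes in text.

import numpy as np

def lag1_correlation(x):
    """Pearson correlation between consecutive samples of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

rng = np.random.default_rng(0)
# Perceptual-like signal: a smooth random walk (a stand-in for pixel
# rows or audio, where neighboring samples are strongly dependent).
perceptual = np.cumsum(rng.normal(size=10_000))
# Linguistic-like signal: raw byte codes of English text.
linguistic = np.frombuffer(
    b"the quick brown fox jumps over the lazy dog " * 200, dtype=np.uint8)

print(lag1_correlation(perceptual))  # ~0.999: strong local structure
print(lag1_correlation(linguistic))  # far lower at the raw-code level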
That's if you aim at getting an AGI that is intelligent in the real world. I
think some people on this list (incl Ben perhaps) might argue that for now -
for safety purposes but also due to costs - it might be better to build an
AGI that is intelligent in a simulated environment.
On Thu, Sep 4, 2008 at 2:10 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Sure it is. Systems with different sensory channels will never fully
understand each other. I'm not saying that one channel (verbal) can
replace another (visual), but that both of them (and many others) can
give
On Thu, Sep 4, 2008 at 8:56 AM, Valentina Poletti [EMAIL PROTECTED] wrote:
I agree with Pei in that a robot's experience is not necessarily more real
than that of, say, a web-embedded agent - if anything it is closer to the
*human* experience of the world. But who knows how limited our own sensory
experience is anyhow. Perhaps a better intelligence would comprehend the
Obviously you didn't consider the potential a laptop has with its
network connection, which in theory can give it all kinds of
perception by connecting it to some input/output device.
yes, that's true ... I was considering the laptop w/ only a power cable as
the AI system in question. Of
On Thu, Sep 4, 2008 at 10:04 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi Pei,
I think your point is correct that the notion of embodiment presented by
Brooks and some other roboticists is naive. I'm not sure whether their
actual conceptions are naive, or whether they just aren't presenting their
foundational philosophical ideas clearly in their writings (being
However, could you guys be more specific regarding the statistical
differences of different types of data? What kind of differences are you
talking about specifically (mathematically)? And what about the differences
at the various levels of the dual-hierarchy? Has any of your work or
So in short you are saying that the main difference between the I/O data of
a motor-embodied system (such as a robot or human) and a laptop is the
ability to interact with the data: make changes in its environment to
systematically change the input?
Not quite ... but, to interact w/ the data in a
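A hedged sketch of the distinction being probed in this exchange (a hypothetical toy environment, nobody's actual system): in a closed sensorimotor loop the agent's own output helps determine its next input, so it can run experiments on the world, which a passive observer of a fixed stream can never do.

import random

class HiddenSwitchEnv:
    """Hypothetical toy world: a hidden bit selects which of two noisy
    streams the agent sees; the action 'flip' toggles the bit."""
    def __init__(self):
        self.bit = 0
    def observe(self):
        p_one = 0.2 if self.bit == 0 else 0.8
        return 1 if random.random() < p_one else 0
    def step(self, action):
        if action == "flip":
            self.bit ^= 1          # the agent's output shapes its next input
        return self.observe()

def active_probe(env, steps=1000):
    """Flip periodically and tally each regime separately - an experiment
    that passively watching a single fixed stream cannot substitute for."""
    tallies = {0: [], 1: []}
    for t in range(steps):
        obs = env.step("flip" if t % 100 == 0 else "wait")
        tallies[env.bit].append(obs)
    return {bit: sum(v) / len(v) for bit, v in tallies.items() if v}

print(active_probe(HiddenSwitchEnv()))  # roughly {0: 0.2, 1: 0.8}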
Hi Ben,
You may have stated this explicitly in the past, but I just want to clarify -
you seem to be suggesting that a phenomenological self is important if not
critical to the actualization of general intelligence. Is this your belief, and
if so, can you provide a brief justification of
--- On Thu, 9/4/08, Valentina Poletti [EMAIL PROTECTED] wrote:
Ppl like Ben argue that the concept/engineering aspect of intelligence is
independent of the type of environment. That is, given you understand how
to make it in a virtual environment you can then transpose that concept
into a real
On Thu, Sep 4, 2008 at 9:35 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I understand that a keyboard and touchpad do provide proprioceptive input,
but I think it's too feeble, and too insensitively respondent to changes in
the environment and the relation btw the laptop and the environment, to
On Thursday 04 September 2008, Matt Mahoney wrote:
Another aspect of embodiment (as the term is commonly used) is the
false appearance of intelligence. We associate intelligence with
humans, given that there are no other examples. So giving an AI a
face or a robotic body modeled after a human
On Thu, Sep 4, 2008 at 2:22 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
The paper seems to argue that embodiment applies to any system with inputs
and outputs, and therefore all AI systems are embodied.
No. It argues that since every system has inputs and outputs,
'embodiment', as a non-trivial
--- On Wed, 9/3/08, Pei Wang [EMAIL PROTECTED] wrote:
TITLE: Embodiment: Who does not have a body?
AUTHOR: Pei Wang
ABSTRACT: In the context of AI, ``embodiment'' should not be
interpreted as ``giving the system a body'', but as ``adapting to the
system's experience''. Therefore, being a robot is neither a
sufficient condition nor a necessary
Pei: it is important to understand that linguistic experience and
non-linguistic experience are both special cases of experience, and the
latter is not more real than the former. In the previous discussions,
many people implicitly suppose that linguistic experience is nothing but
Pei,
I have a different sort of reason for thinking embodiment is important ...
it's a deeper reason that I think underlies the 'embodiment is important
because of symbol grounding' argument.
Linguistic data, mathematical data, visual data, motoric data etc. are all
just bits ... and intelligence
I think I have an appropriate term for what I was trying to conceptualise.
It is that intelligence has not only to be embodied, but it has to be
EMBEDDED in the real world - that's the only way it can test whether
information about the world and real objects is really true. If you want to