Hi Pei,

I think you're right that the notion of "embodiment" presented by
Brooks and some other roboticists is naive.  I'm not sure whether their
actual conceptions are naive, or whether they just aren't presenting their
foundational philosophical ideas clearly in their writings (being ultimately
more engineering-oriented people, and probably not that accustomed to the
philosophical style of discourse in which these sorts of definitional
distinctions need to be more precisely drawn).  I do think (in approximate
concurrence with your paper) that ANY control system physically embodied in
a physical system S, with input and output streams that are correlated with
the physical state of S, should be considered "psychologically embodied."
Clearly, whether it's a robot or a laptop (w/o network connection, if you
like), such a system has the basic property of embodiment.  Furthermore, S
doesn't need to be a physical
system ... it could be a virtual system inside some "virtual world" (and
then there's the question of what properties characterize a valid "virtual
world" ... but let's leave that for another email thread...)

However, I think that not all psychologically embodied systems possess a
sufficiently rich psychological embodiment to lead to significant general
intelligence....  My suggestion is that a laptop w/o a network connection or
odd sensor peripherals probably does not have sufficiently rich correlations
between its I/O stream and its physical state to allow it to develop a
robust self-model of its physical self (which can then be used as a basis
for a more general phenomenal self).
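
Just to make that "correlations between the I/O stream and the physical
state" criterion a bit more concrete, here is a toy sketch of how one
might operationalize it.  This is purely my own illustration; the scalar
score, the variable names, and the two example "systems" below are
placeholders rather than anything drawn from your paper.

import numpy as np

def embodiment_score(io_stream, physical_state):
    """Crude 'psychological embodiment' score: the absolute Pearson
    correlation between a 1-D I/O signal and a 1-D physical-state
    signal sampled at the same times.  Purely illustrative."""
    io = np.asarray(io_stream, dtype=float)
    ps = np.asarray(physical_state, dtype=float)
    assert io.shape == ps.shape
    return float(abs(np.corrcoef(io, ps)[0, 1]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 1000)

    # A "robot-like" system: its sensor readings track its own physical
    # state (e.g., a temperature sensor tracking motor activity), plus noise.
    motor_activity = np.sin(t) ** 2
    robot_sensor = motor_activity + 0.1 * rng.standard_normal(t.size)
    print("robot-like :", embodiment_score(robot_sensor, motor_activity))

    # A "laptop-like" system: its input stream (say, a text feed) is
    # statistically unrelated to its physical state (say, CPU temperature).
    text_feed = rng.standard_normal(t.size)
    cpu_temp = 40 + 5 * np.cos(t)
    print("laptop-like:", embodiment_score(text_feed, cpu_temp))

Of course a single scalar correlation is far too weak a measure of
"richness"; something like the mutual information between the whole
joint I/O history and the full physical state would be closer to what
I have in mind.  But it conveys the flavor of the distinction I'm
drawing between the robot and the bare laptop.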

I think that Varela and crew understood the value of this rich network of
correlations, but mistakenly assumed it to be a unique property of
biological systems...

I realize that the points you made in your paper do not contradict the
suggestions I've made in this email.  I don't think anything significant in
your paper is wrong, actually.  It just seems to me not to address the most
interesting aspects of the embodiment issue as related to AGI.

-- Ben G

On Thu, Sep 4, 2008 at 7:06 AM, Pei Wang <[EMAIL PROTECTED]> wrote:

> On Thu, Sep 4, 2008 at 2:10 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >> Sure it is. Systems with different sensory channels will never "fully
> >> understand" each other. I'm not saying that one channel (verbal) can
> >> replace another (visual), but that both of them (and many others) can
> >> give symbol/representation/concept/pattern/whatever-you-call-it
> >> meaning. No one is more "real" than others.
> >
> > True, but some channels may -- due to the statistical properties of the
> > data coming across them -- be more conducive to the development of AGI
> > than others...
>
> I haven't seen any evidence for that. For human intelligence, maybe,
> but for intelligence in general, I doubt it.
>
> > I think the set of relations among words (considered in isolation,
> > without their referents) is "less rich" than the set of relations among
> > perceptions of a complex world, and far less rich than the set of
> > relations among {perceptions of a complex world, plus words referring to
> > these perceptions}....
>
> Not necessarily. Actually some people may even make the opposite
> argument: relations among non-linguistic components in experience are
> basically temporal or spatial, while the relations among words and
> concepts come in many more types. I won't go that far, but I guess in
> some sense all channels may have the same (potential) richness.
>
> > And I think that this lesser richness makes sequences of words a much
> > worse input stream for a developing AGI
> >
> > I realize that quantifying "less rich" in the above is a significant
> > challenge, but I'm presenting my intuition anyway...
>
> If your condition is true, then your conclusion follows, but the
> problem is in that "IF".
>
> > Also, relatedly and just as critically, the set of perceptions regarding
> > the body and its interactions with the environment are well-structured to
> > give the mind a sense of its own self.
>
> We can say the same for every input/output operation set of an
> intelligent system. "SELF" is defined by what the system can feel and
> do.
>
> > This primitive infantile sense of
> > body-self gives rise to the more sophisticated phenomenal self of the
> > child and adult mind, which gives rise to reflective consciousness, the
> > feeling of will, and other characteristic structures of humanlike general
> > intelligence.
>
> Agree.
>
> > A stream of words doesn't seem to give an AI the same kind of
> > opportunity for self-development....
>
> If the system just sits there and passively accepts whatever words come
> into it, what you said is true. If the incoming "words" are causally
> related to its outgoing "words", will you still say that?
>
> > I agree with your point, but I wonder if it's partially a "straw man"
> > argument.
>
> If you read Brooks or Pfeifer, you'll see that most of their arguments
> are explicitly or implicitly based on the myth that only a robot "has
> a body", "has real sensors", "lives in a real world", ...
>
> > The proponents of embodiment as a key  aspect of AGI don't of
> > course think that Cyc is disembodied in a maximally strong sense -- they
> > know it interacts with the world via physical means.  What they mean by
> > "embodied" is something different.
>
> Whether a system is "embodied" does not depend on hardware, but on
> semantics.
>
> > I don't have the details at my fingertips, but I know that Maturana,
> > Varela and Eleanor Rosch took some serious pains to carefully specify the
> > sense in which they feel "embodiment" is critical to intelligence, and to
> > distinguish their sense of embodiment from the trivial sense of
> > "communicating via physical signals."
>
> That is different. The "embodiment" school in CogSci doesn't focus on
> the body (they know every human already has one), but on experience.
> However, they have their own misconceptions about AI. As I mentioned,
> Barsalou and Lakoff both thought strong AI is unlikely because a
> computer cannot have human experience --- I agree with what they said
> except for their narrow conception of intelligence (CogSci people tend
> to take "intelligence" as "human intelligence").
>
> > I suggest your paper should probably include a careful response to the
> > characterization of embodiment presented in
> >
> > http://www.amazon.com/Embodied-Mind-Cognitive-Science-Experience/dp/0262720213
> >
> > I note that I do not agree with the arguments of Varela, Rosch, Brooks,
> > etc.  I just think their characterization of embodiment is an interesting
> > and nontrivial one, and I'm not sure NARS with a text stream as input
> > would be embodied according to their definition...
>
> If I get the time (and motivation) to extend the paper into a journal
> paper, I'll double the length by discussing "embodiment in CogSci". In
> the current version, as a short conference paper, I'd rather focus on
> "embodiment in AI", and only attack the "robot myth".
>
> Pei



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson


