I got the following quote from the Wikipedia article on "understanding".

"According to the independent socionics researcher Rostislav Persion:

In order to test one's understanding it is necessary to present a question
that forces the individual to demonstrate the possession of a model, derived
from observable examples of that model's production or potential production
(in the case that such a model did not exist before hand). Rote memorization
can present an illusion of understanding, however when other questions are
presented with modified attributes within the query, the individual cannot
create a solution due to a lack of a deeper representation of reality."

There are many levels of "understanding", but I think it is wrong to believe
that understanding can only take the form found in human beings.  If a
person believed otherwise, then de facto no computer program could ever
have any understanding.  Given that assumption, the above quote states that
a model of something conveys understanding: the more complex the model, the
better the understanding.  He specifically rules out rote memorization as a
type of understanding, and I agree.  To my mind, this means that rule-based
systems (no matter the number of rules) can never be considered to
understand anything, and I think the Chinese Room experiment speaks to this
point.  Models aren't just a type of memorizing, and they are not just a
bunch of symbols defined by other symbols, even though at the micro level
computers definitely just manipulate symbols.
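The rote-versus-model distinction can be made concrete in code: a lookup
table answers only the exact questions it memorized, while even a crude
model generalizes to queries with "modified attributes", as in the quote
above. A minimal sketch (the projectile-range example is my own
illustration, not from the quoted text):

```python
import math

# "Rote memorization": a fixed table of question -> answer pairs.
memorized = {
    ("range", 10.0, 45.0): 10.2,   # launch speed 10 m/s at 45 degrees
    ("range", 20.0, 45.0): 40.8,
}

# A "model": the projectile-range formula, usable for any inputs.
def projectile_range(speed, angle_deg, g=9.81):
    return speed ** 2 * math.sin(math.radians(2 * angle_deg)) / g

# Original question: both approaches answer it.
print(memorized[("range", 10.0, 45.0)])          # 10.2
print(round(projectile_range(10.0, 45.0), 1))    # 10.2

# Modified attribute (angle changed): the table has no entry,
# but the model still produces a solution.
query = ("range", 10.0, 30.0)
print(query in memorized)                        # False
print(round(projectile_range(10.0, 30.0), 1))    # 8.8
```

No number of extra table rows turns the first approach into the second;
the formula carries a "deeper representation" that the table lacks.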

If the models are based on the real-world grounding of the person who
programs them, doesn't this mean that grounding can occur if 1) a model
is used instead of just rules or examples, and 2) the model includes
diagrams and enough variables that it can be explored (maybe in ways not
thought of by the programmer)?  No computer program can ever be expected
to experience the world through human eyes (unless the model has been
uploaded from a human), but does that negate the possibility of
understanding by a non-human entity?  If humans can accurately program
models that relate directly to the real world and reality, then why
couldn't an AI use those models to manipulate things in the real world?
The grounding doesn't have to be created by the AGI UNLESS the model is
created by, or emerges from, the AGI itself.
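One way to read point (2) above in code: if a programmed model exposes its
variables, an agent can sweep them and discover facts the programmer never
wrote down. A toy sketch, again using a projectile model as my own
illustrative assumption:

```python
import math

# A programmed model of the real world: projectile range as a
# function of launch speed and angle (ignoring air resistance).
def projectile_range(speed, angle_deg, g=9.81):
    return speed ** 2 * math.sin(math.radians(2 * angle_deg)) / g

# An agent "exploring" the model's variables: sweep the angle and
# find the one that maximizes range. The programmer supplied only
# the formula; the 45-degree optimum is discovered, not hardcoded.
best_angle = max(range(0, 91), key=lambda a: projectile_range(10.0, a))
print(best_angle)  # 45
```

The conclusion the agent reaches was implicit in the model but never
stated by its author, which is the sense in which exploration can go
beyond what the programmer thought of.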

-- David Clark

> -----Original Message-----
> From: Terren Suydam [mailto:[EMAIL PROTECTED]
> Sent: August-06-08 8:24 AM
> To: [email protected]
> Subject: Re: [agi] Groundless reasoning --> Chinese Room
> 
> 
> Harry,
> 
> --- On Wed, 8/6/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
> > I'll take a stab at both of these...
> >
> > The Chinese Room to me simply states that understanding cannot be
> > decomposed into sub-understanding pieces. I don't see it as addressing
> > grounding, unless you believe that understanding can only come from the
> > outside world, and must become part of the system as atomic pieces of
> > understanding. I don't see any reason to think that, but proving it is
> > another matter -- proving negatives is always difficult.
> 
> The argument is only implicitly about the nature of understanding. It
> is explicit about the agent of understanding. It says that something
> that moves symbols around according to predetermined rules - if that's
> all it's doing - has no understanding. Implicitly, the assumption is
> that understanding must be grounded in experience, and a computer
> cannot be said to be experiencing anything.
> 
> It really helps here to understand what a computer is doing when it
> executes code, and the Chinese Room is an analogy to that which makes a
> computer's operation expressible in terms of human experience -
> specifically, the experience of incomprehensible symbols like Chinese
> ideograms. All a computer really does is apply rules determined in
> advance to manipulate patterns of 1's and 0's. No comprehension is
> necessary, and invoking that at any time is a mistake.
> 
> Fortunately, that does not rule out embodied AI designs in which the
> agent is simulated. The processor still has no understanding - it just
> facilitates the simulation.
> 
> > As to philosophy, I tend to think of its relationship to AI as somewhat
> > the same as alchemy's relationship to chemistry. That is, it's one of
> > the origins of the field, and has some valid ideas, but it lacks the
> > hard science and engineering needed to get things actually working. This
> > is admittedly perhaps a naive view, and reflects the traditional
> > engineering distrust of the humanities. I state it not to be critical of
> > philosophy, but to give you an idea how some of us think of the area.
> 
> As an engineer who builds things every day (in software), I can
> appreciate the *limits* of philosophy. Spending too much time in that
> domain can lead to all sorts of excesses of thought, castles in the
> sky, etc. However, any good engineer will tell you how important theory
> is in the sense of creating and validating design. And while the theory
> behind rocket science involves physics, chemistry, and fluid dynamics
> (and others no doubt), the theory of AI involves information theory,
> computer science, and philosophy of mind & knowledge, like it or not.
> If you want to be a good AI engineer, you better be comfortable with
> all of the above.
> 
> Terren
> 
> > Terren Suydam wrote:
> > > Abram,
> > >
> > > If that's your response then we don't actually agree.
> > >
> > > I agree that the Chinese Room does not disprove strong AI, but I
> > > think it is a valid critique for purely logical or non-grounded
> > > approaches. Why do you think the critique fails on that level?
> > > Anyone else who rejects the Chinese Room care to explain why?
> > >
> > > (I know this has been discussed ad nauseam, but that should only
> > > make it easier to point to references that clearly demolish the
> > > arguments. It should be noted however that relatively recent
> > > advances regarding complexity and emergence aren't quite as well
> > > hashed out with respect to the Chinese Room. In the document you
> > > linked to, mention of emergence didn't come until a 2002 reference
> > > attributed to Kurzweil.)
> > >
> > > If you can't explain your dismissal of the Chinese Room, it only
> > > reinforces my earlier point that some of you who are working on AI
> > > aren't doing your homework with the philosophy. It's ok to reject
> > > the Chinese Room, so long as you have arguments to do it (and if
> > > you do, I'm all ears!) But if you don't think the philosophy is
> > > important, then you're more than likely doing Cargo Cult AI.
> > >
> > > (http://en.wikipedia.org/wiki/Cargo_cult)
> > >
> > > Terren
> > >
> > > --- On Tue, 8/5/08, Abram Demski <[EMAIL PROTECTED]> wrote:
> > >
> > >> From: Abram Demski <[EMAIL PROTECTED]>
> > >> Subject: Re: [agi] Groundless reasoning --> Chinese Room
> > >> To: [email protected]
> > >> Date: Tuesday, August 5, 2008, 9:49 PM
> > >>
> > >> Terren,
> > >>
> > >> I agree. Searle's responses are inadequate, and the whole thought
> > >> experiment fails to prove his point. I think it also fails to
> > >> prove your point, for the same reason.
> > >>
> > >> --Abram
> > >
> > > -------------------------------------------
> > > agi
> > > Archives:
> > https://www.listbox.com/member/archive/303/=now
> > > RSS Feed:
> > https://www.listbox.com/member/archive/rss/303/
> > > Modify Your Subscription:
> > https://www.listbox.com/member/?&;
> > > Powered by Listbox: http://www.listbox.com


