Eric Baum wrote: 

> James> Jef Allbright <[EMAIL PROTECTED]> wrote:
> James> Russell Wallace wrote:
>  
> >> Syntactic ambiguity isn't the problem. The reason computers don't 
> >> understand English is nothing to do with syntax, it's because they 
> >> don't understand the world.

<snip>

> >> But the computer still doesn't understand the sentence, because it
> >> doesn't know what cats, mats and the act of sitting _are_. (The best
> >> test of such understanding is not language - it's having the computer
> >> draw an animation of the action.)
> 
> James> Russell, I agree, but it might be clearer if we point out that
> James> humans don't understand the world either. We just process these
> James> symbols within a more encompassing context.
> 
> James, I would like to know what you mean by "understand".
> In my view, what humans do is the example we have of 
> understanding, the word should be defined so as to have a 
> reasonably precise meaning, and to include the observed phenomenon.
> 
> You apparently have something else in mind by understanding.

Eric, you may refer to me as "James" ;-), but as with the topic at hand,
it adds an unnecessary level of complexity and impedes understanding.

It is common to think of machines as lacking the faculty of
understanding while humans possess it, and similarly of machines as
lacking consciousness while humans have it.  This way of thinking is
adequately effective for daily use, but it carries and propagates the
implicit assumption that "understanding" and "consciousness" are somehow
intrinsically distinct from other types of processing carried out by
physical systems.

It is simpler and more coherent to think in terms of a scale of
processing within increasingly complex contexts, such that one might say
that a vending machine understands the difference between certain coins,
an infant understands that a nipple is a source of goodness, and most
adults understand that cooperation is more productive than conflict.
Alternatively, we can say that a vending machine responds effectively to
the insertion of proper coins, an infant responds effectively to the
presence of a nipple, and most adults respond effectively by choosing
cooperation over conflict.
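
To make the point concrete, here is a minimal sketch in Python (the
coin denominations, item price, and names are invented purely for
illustration) of what the vending machine's "understanding" amounts to:
an internal model, here just a lookup table and a running credit total,
driving effective responses to its environment.

    # The machine's entire "understanding" of its world is this lookup
    # table plus a running credit total: an internal model of limited
    # but sufficient fidelity.
    ACCEPTED_COINS = {"nickel": 5, "dime": 10, "quarter": 25}  # cents
    ITEM_PRICE = 50  # cents

    class VendingMachine:
        def __init__(self):
            self.credit = 0  # the internal model: total value inserted

        def insert(self, coin):
            # Respond effectively: update the model for proper coins,
            # reject everything else.
            if coin in ACCEPTED_COINS:
                self.credit += ACCEPTED_COINS[coin]
                return "accepted"
            return "rejected"

        def press_button(self):
            # Vend only when the model says the price is covered.
            if self.credit >= ITEM_PRICE:
                self.credit -= ITEM_PRICE
                return "vend"
            return "insufficient credit"

    machine = VendingMachine()
    machine.insert("quarter")      # accepted
    machine.insert("bottle cap")   # rejected
    machine.insert("quarter")      # accepted
    print(machine.press_button())  # "vend"

On this view, the difference between the machine, the infant, and the
adult lies in the scope and fidelity of the internal model, not in the
presence or absence of some further ingredient.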

But let's rather not say that a vending machine doesn't really
understand the difference between coins, that an infant doesn't really
understand the whys and wherefores of nipples, but that most adults
really do understand, in all its significant implications, why
cooperation is more productive than conflict.

Each of these examples is of a physical system responding with some
degree of effectiveness based on an internal model that represents,
with some degree of fidelity, its local environment.  It's an
unnecessary complication, and leads to endless discussions of qualia,
consciousness, free will and the like, to assume that at some magical,
unspecified point there is a transition to "true understanding".

None of which is intended to deny that from a common-sense point of
view, humans understand things that machines don't.  But for computer
scientists working on AI, I think such conceptualizing is sloppy and
impedes effective discussion and progress.

- Jef
