Argh!  Are you all making the mistake I think you are making? Searle is
using a technical term in philosophy--"intentionality".  It is different
from the common use of intending as in aiming to do something or intention
as a goal.  (Here's a wiki http://en.wikipedia.org/wiki/Intentionality).

Yes and no. Merely having goals does not require intentionality; however, fulfilling goals *effectively* does (I believe) require intentionality. Without intentionality (which in the philosophical sense is basically the same as grounding), you can only fulfill your goals by accident -- OR -- by someone else's design/intentionality. A true AGI must have its own intentionality (and groundedness, to succeed in its intentionality without someone else's intentionality taking over). Searle's Chinese Room does not have a goal, much less groundedness or intentionality -- but I believe that we can program a machine so that it has all three (after all, as Searle says, aren't we all just biological machines?).
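To make that last claim concrete, here is a toy sketch (my own illustration, not anything from Searle or this thread) of a program with all three ingredients as defined above: a goal, grounding via feedback from its environment, and behavior that adapts toward the goal. "Understanding" is then scored exactly as discussed below -- the extent to which its actions are *effectively* modified, judged relative to its goal. All names and numbers here are made up for the example.

```python
def run_agent(target: float, start: float, steps: int = 20) -> float:
    """Return the final distance to the goal after feedback-driven adaptation.

    goal      = reach `target`
    grounding = the agent observes the actual error each step
    adaptation= it shrinks its step size when feedback shows it overshot
    """
    position = start
    step_size = 1.0
    for _ in range(steps):
        error = target - position  # grounding: observe the external world
        if abs(error) < 1e-6:
            break
        position += step_size if error > 0 else -step_size  # act on the goal
        # adapt: if the sign of the error flipped, we overshot -- halve the step
        if (target - position) * error < 0:
            step_size /= 2.0
    return abs(target - position)

# Crude "understanding" score: how effectively did actions close the gap?
final_error = run_agent(target=3.7, start=0.0)
understanding = 1.0 - final_error / abs(3.7 - 0.0)
```

Nothing deep is claimed here -- it is just a thermostat-grade agent -- but it shows that "goal + groundedness + effectiveness" is a programmable, measurable notion rather than an appeal to conscious desire.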

The Chinese room argument is pretty simple, and it doesn't really try to
do too much.  It's really just all about how you can manipulate symbols,
but you might not get any real meaning because the symbols aren't really
referring to anything.  Searle also says it's trivially true that machines
can possibly understand things because we do and we're machines.  It's
just formal systems that have this problem.

Just copying the above for everyone else again.  I agree completely.

       Mark


----- Original Message ----- From: <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Thursday, August 07, 2008 2:57 AM
Subject: Re: [agi] Groundless reasoning --> Chinese Room


Argh!  Are you all making the mistake I think you are making? Searle is
using a technical term in philosophy--"intentionality".  It is different
from the common use of intending as in aiming to do something or intention
as a goal.  (Here's a wiki http://en.wikipedia.org/wiki/Intentionality).
The sense that Searle is using is roughly how things like words (but it
could just be finger pointing) can refer to other things.  I see the wiki
uses the word "aboutness".

I have to admit I'm pretty influenced by Searle.  I've listened to his
lectures on philosophy of mind from the Teaching Company.  He actually
came to U of M and gave a lecture in the Star Wars Senate room where we
had AGI-08.  This was during the semester when the cognitive science
seminar there was about the symbol grounding problem.  I didn't go to the
seminar much, so I didn't see what they came up with.

The Chinese room argument is pretty simple, and it doesn't really try to
do too much.  It's really just all about how you can manipulate symbols,
but you might not get any real meaning because the symbols aren't really
referring to anything.  Searle also says it's trivially true that machines
can possibly understand things because we do and we're machines.  It's
just formal systems that have this problem.
andi

Mark Waser wrote:
 The critical point that most people miss -- and what is really
 important for this list (and why people shouldn't blindly dismiss
 Searle) is that it is *intentionality* that defines "understanding".
 If a system has goals/intentions and its actions are modified by the
 external world (i.e. it is grounded), then the extent to which its
 actions are *effectively* modified (as judged in relation to its
 intentions) is the extent to which it "understands".  The most
 important feature of an AGI is that it has goals and that it modifies
 its behavior (and learns) in order to reach them.  The Chinese Room
 is incapable of these behaviors since it has no desires.

Harry Chesley replied:
I think this is an excellent point, so long as you're careful to define
"intention" simply in terms of goals that the system is attempting to
satisfy/maximize, and not in terms of conscious desires. As you point
out, the former provides a context in which to define understanding and
to measure it. The latter leads off into further undefined terms and
concepts -- I mention this rather than just agreeing outright mainly
because of your use of the word "desire" in the last sentence, which
/could/ be interpreted anthropomorphically.




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




