Terren Suydam wrote:
Harry,
--- On Wed, 8/6/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
> But it's a preaching to the choir argument: Is there anything more
> to the argument than the intuition that automatic manipulation
> cannot create understanding? I think it can, though I have yet to
> show it.
The burden is on you, or anyone pursuing purely logical approaches,
to show how you can cross the chasm from syntax to semantics - from
form to meaning. How does your intuition of "automatic understanding"
inform a design that does nothing but manipulate symbols? At what
point does your design cross the boundary from simply manipulating
data automatically to understanding it? To me, the real problem here
is projecting your own understanding onto a machine that appears to
be doing something intelligent.
I guess I'll settle for the pragmatic answer: When (if) I (we) get it
working and it produces useful real world results, I'll be happy,
without worrying specifically whether it "understands."
If your intuition is correct, then it's not a big leap to say that
today's chess programs comprehend chess. Do you agree?
Yes. Though in a much narrower sense than we do, since they have no
larger context of things like games, competition, war, etc.
> I totally agree with all but the last sentence. The Chinese Room
> does provide a simple but accurate analogy to what a computer does.
> As such, it's excellent for helping non-computer types understand
> this issue in AI/philosophy. But I know of no definition of
> "comprehension" that is impossible to create using a program or a
> Chinese Room -- of course, I don't know /any/ complete definition
> of "comprehension," and maybe when I do, it will have the feature
> you believe it has.
I think your problems here are due to a lack of clarity about what it
means for some kind of agent to understand something. For starters,
understanding is done by something - it doesn't exist in a vacuum.
What is the nature of that something?
Certainly there is lack of clarity about understanding, at least on my
part. Some day we'll all look back and laugh at our misconceptions about
the topic.
I'm not at all sure that understanding must be active. It may be that a
textbook on physics understands physics. But it doesn't do anything
with that understanding, which is how we're used to seeing understanding
expressed, so we don't think of it as understanding.
> Yes, I don't mean to dismiss philosophy. In some areas of AI, there
> is far more understanding within philosophy than within computer
> science. But there's also lots of angels dancing on pins, so it can
> take a lot of time to find it. In some ways it's like having a
> domain expert, always a good thing when writing a program.
Totally agree! But it is so valuable to have your beliefs
challenged, which is why we should not rely on others to do the heavy
lifting.
Very true. Which is why this list is great when it sticks to challenging
rather than insulting. (Which you've done perfectly, BTW.)
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com