Steve Richfield wrote:
Richard,
Rather than getting lost in responding to the details of your posting,
my entire discussion hinged on one apparent fact that may be in dispute
here. With all of the challenges of the English language, I will try to
make a short statement of that apparent fact.
You can listen to a passage and completely understand what it says. You
can read the same passage and completely understand what it
says. However, when YOU (and not some imperfect computer program that
you might design and write) sit down and carefully identify parts of
speech, construct the potential diagrams for the sentence, go through
domain-specific disambiguation, etc., with full opportunity to go back
and correct any mistakes that you might have made, you will find that,
as often as not, sentences in that passage do NOT communicate what you
got from them when you first heard or read them. If YOU can't do this
job (I sure know that I can't, because I have tried), then how the heck
are you going to design a computer program to do it?
Going a step further, you (as a simulated ultimate AGI program) would
start considering words to fill in any recognized and potentially
unrecognized (because the sentence still "hung together") gaps. If you
are at all open-minded about this, you will see that there are LOTS of
potential gap-fillers for most sentences, so that no particular "filled
in" sentence would seem to be preferred, even with limitless
domain-specific knowledge.
I presume that you have a PhD and have been in this field since the days
of Joe Weizenbaum, whose lectures I attended at the very first AI
conference at Stanford. You can plan and design forever, but unless you
have done enough raw-data analysis, which seems to be REALLY lacking in
this field (no reflection on your particular efforts), then all of the
clever whiz-bang technology that you have developed during the last 40
years ain't worth spit.
If you could for a moment, please explain: How is it potentially
possible for a computer program to succeed in extracting detailed
meaning from gapped sentences, when this is apparently impossible to do
by hand?
My own unproven belief is that people use their input as constraints to
bound the range of reality, and NOT as precise statements of reality
itself as has apparently been the presumption in the "understanding"
efforts that I have looked at. Where ambiguities in a sentence might
erect redundant constraints, there is no foul, because ineffective
constraints change nothing.
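The view that inputs serve as constraints bounding a range of possibilities, with redundant constraints doing no harm, can be sketched in a few lines of Python. This is a hypothetical toy; the scenario and names are invented purely for illustration:

```python
# Toy illustration (not anyone's actual system): each input acts as a
# constraint that narrows the set of situations still considered
# possible; a redundant constraint leaves the set unchanged.

def apply_constraints(possibilities, constraints):
    """Intersect the possibility set with each constraint in turn."""
    for keep in constraints:
        possibilities = {p for p in possibilities if keep(p)}
    return possibilities

# The "range of reality": candidate (agent, word-sense) interpretations.
situations = {("John", "bank_river"), ("John", "bank_money"),
              ("Mary", "bank_river"), ("Mary", "bank_money")}

constraints = [
    lambda s: s[0] == "John",           # "John did it"
    lambda s: s[1].startswith("bank"),  # redundant: true of every candidate
    lambda s: s[1] == "bank_money",     # "he deposited a check"
]

result = apply_constraints(situations, constraints)
print(result)  # the redundant second constraint changed nothing
```

The redundant constraint filters out nothing, so the final set is the same as if it had never been stated, which is the "no foul" point above.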
I have looked into this area only deeply enough to process problem
statements, which have many powerful simplifying assumptions that do NOT
exist in explanations, directions, etc., that an AGI would be expected
to process.
Has anyone else done this sort of homework?
I will answer your question, but bear in mind that the answer is
difficult to compress into one of these list postings.
The approach that I adopt (and have adopted since at least 1986) is to find a
way to do all cognitive processing by means of "multiple weak
constraints" (for more detail on what that means, see the twin volumes
called "Parallel Distributed Processing" by Rumelhart, McClelland and
the PDP Research Group). One of the most immediate consequences of
doing things this way is that extracting the pragmatics from sentences
(which is what you are talking about) is not at all difficult, in
principle. Just as you say in your comment above, the best way to
understand the full meaning of a sentence is to allow many constraints
to apply to its interpretation, not just low-level syntactic or
disambiguation techniques. When a string of words (or other units) is
being analyzed, it is effectively surrounded by a cluster of elements
that are all trying to "model" different aspects of the string, and some
of these model builders are attempting to interpret the sentence in
quite high-level terms (using, for example, world knowledge that is
independent of language). Each of these model builders, in isolation,
would not be able to analyze the sentence. Some of the model builders
will also be completely incorrect. But overall, they compete and assist
one another (the "weak constraint" idea) until a single consistent
cluster becomes stronger than the others. This process is called
"dynamic relaxation", and it results in an understanding of the sentence.
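As a rough sketch of how competing and assisting hypotheses can settle into one consistent cluster, here is a toy relaxation network in Python. This is a hypothetical illustration, not the actual system described here; the hypotheses, weights, and update rule are all invented for the example:

```python
# Minimal sketch of dynamic relaxation over weak constraints.
# Hypotheses are nodes; compatible hypotheses excite each other,
# incompatible ones inhibit each other; repeated updates let one
# consistent cluster of interpretations win.

def relax(hypotheses, weight, steps=200, rate=0.1):
    """Update each activation by rate * (weighted sum of the others),
    clipped to [0, 1], until the network settles."""
    act = {h: 0.5 for h in hypotheses}  # start every hypothesis undecided
    for _ in range(steps):
        new = {}
        for h in hypotheses:
            net = sum(weight(h, other) * act[other]
                      for other in hypotheses if other != h)
            new[h] = min(1.0, max(0.0, act[h] + rate * net))
        act = new
    return act

# "He sat on the bank": two word senses plus a world-knowledge cue.
hyps = ["bank=riverside", "bank=institution", "context=outdoors"]

def weight(a, b):
    pairs = {frozenset(p): w for p, w in [
        (("bank=riverside", "context=outdoors"), +1.0),    # mutual support
        (("bank=institution", "context=outdoors"), -0.5),  # weak conflict
        (("bank=riverside", "bank=institution"), -1.0),    # mutually exclusive
    ]}
    return pairs.get(frozenset((a, b)), 0.0)

act = relax(hyps, weight)
```

After relaxation, "bank=riverside" and "context=outdoors" end up strongly active while "bank=institution" is suppressed: the consistent cluster has out-competed the alternative, even though no single constraint decided the matter on its own.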
There are some very old systems that operated in this style (have you
heard of "Hearsay"?), but the only reason that you will not see any
sentence understanding systems built by me that use this technique is
that getting one of them to work is tedious IF you build it by hand. It
would work (just as those older systems did work), but getting high-level
understanding of a wide range of sentences requires your system to be
filled with vast amounts of world knowledge, all of it tuned precisely
to make the dynamic relaxation process converge properly. Now, because
I have no interest in spending my entire career building one of those by
hand, I am working on techniques for getting the system to build its own
consistent knowledge. Doing that in the right way depends on the right
architectural choices, and since there are a bazillion choices
available, and since it is not clear that a random, or 'plausible'
choice of architecture will work (see my paper on the "Complex Systems
Problem"), what I am actually doing is developing the methodology to
enable me to home in on the right choice of architecture (or, more
precisely, the right choice of architectural detail ... the overall
architecture is quite well specified).
If you want more information about how the process of dynamic relaxation
can work, see Hofstadter's work on (among other things) "parallel
terraced scans", which you can find in his book Fluid Concepts and
Creative Analogies.
This sounds like it is exactly like what you call your own "unproven
belief" about how things really work in the human mind.
What you have to understand is that, while you have been laying down
some pretty strong criticisms of what "everyone" is doing wrong, some of
us understood those issues long ago, and some like myself have been
explicitly and directly working on a solution for 20 years or more. You
may not be happy to hear that it has taken so long, but the majority of
those in the AI community do not much like this approach, so it has
received little attention. It is also hard to solve the problem in a
comprehensive way (a way that is more than just a short-term fix to get
grant money, which then turns out to be a dead end).
Richard Loosemore