Richard,

On 5/18/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Steve Richfield wrote:
>
> "I have a headache. I missed my morning coffee."
>>   From this, Dr. Eliza will see a present-tense headache and a present-tense
>> negated (presumed) consumption of coffee. A link definition would look for a
>> headache combined with past-tense coffee but no present-tense coffee. What it
>> presently lacks is seeing that this also implies that the writer usually
>> drinks coffee in the morning, which is VERY important. Seeing no (apparent)
>> mention of usual coffee consumption, Dr. Eliza would then ask something like
>> "How much coffee do you usually drink on an average day?" in the hope that
>> you would provide this (redundant) information to improve its computation of
>> probability.
>>  Note here that the apparent primary meaning of this sentence (the
>> linking of the headache to the missed morning coffee) was properly DISCARDED
>> because there was nothing useful that could be done with this opinion of
>> causality on the part of the user.
>>  What I fail to see is how fully "understanding" the written/spoken word
>> is of any use to any computer program! What would you then do with that
>> understanding, since most of it will be beyond the ability of any computer
>> program to do anything useful and accurate with?
>>
>
> If you are saying that people have tried and failed to come up with good
> methods for extracting the meaning of sentences such as "I have a headache -
> I missed my morning coffee", then you would, of course, be correct.


My REAL point was that around half of all sentences suffer from such
problems. If you are content to understand only the OTHER half, accepting a
~50% error rate, then please proceed as you have been.

Note that Dr. Eliza has a high ~20% "error budget": it will continue to
function usefully until its overall error rate reaches around 20%. This budget
gets divided up between speech recognition, spelling errors, grammatical
errors, design weaknesses, shallow-parsing shortcomings, etc. Speech
recognition problems are responsible for ~10%, and everything else adds up to
~10%, so it just barely works with spoken input. Putting in some common speech
mis-recognitions helped a LOT because it often pushes things back under the
20% point. For example, under some circumstances it may ask if you are taking
"pregnenolone", and when you answer the question, the speech recognition
engine usually hears "pregnant alone" (the Bayesian statistics were obviously
gathered over OTHER domains).
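
To make that concrete, here is a minimal sketch of the idea in Python. The
table entries and function name are my own illustration here, NOT Dr. Eliza's
actual code:

    # Hypothetical sketch: repair common speech mis-recognitions before
    # parsing, to claw back some of the ~10% speech-recognition error share.
    COMMON_MISRECOGNITIONS = {
        # what the engine hears -> what the speaker meant
        "pregnant alone": "pregnenolone",
        "head egg": "headache",  # invented entry, purely illustrative
    }

    def repair_transcript(text: str) -> str:
        """Substitute known mis-recognitions so downstream parsing
        stays within the overall ~20% error budget."""
        lowered = text.lower()
        for heard, intended in COMMON_MISRECOGNITIONS.items():
            lowered = lowered.replace(heard, intended)
        return lowered

    print(repair_transcript("Yes, I am taking pregnant alone daily."))
    # -> "yes, i am taking pregnenolone daily."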

> But the whole point of doing research in Artificial General Intelligence
> (as opposed to narrow-AI) is because we want to go beyond past failures and
> reach a point where we can indeed build systems that can fully understand
> sentences such as the coffee-headache one.


A good first step would seem to be to FULLY understand past failures, rather
than continuing to butt your head against the same wall, just in a slightly
different way.

> Some of us have explicitly made a priority of trying to understand how this
> kind of understanding happens,


As I hope I have explained, I believe that the "understanding" that you are
seeking does NOT occur in humans, and very likely can NOT be made to work in
machines where the input is (technically) nonsense ~50% of the time.

> and some even believe that they [are] really making progress on the problem.


Only because they haven't looked ABOVE the module to see what is needed from
it. Sure, a few more decades of work may be able to fill in some of the gaps
and may even be able to extract >80% of the stated meaning, but where is it
that this "understanding" module starts automatically rejecting ~5% of its
*perfectly* understood input, just as human listeners do? Any AGI that
perfectly accepts and understands its input would quickly devolve into a
psychotic mess.

> In light of that, I cannot make any sense of your last paragraph, above. If
> you mean this literally [!] then I am at a loss for words.


Perhaps we have mutually stumbled into the REAL problem! Your WORDS are not
what is needed here, but rather your DEED of figuring out just what is needed
from an "understanding" module for it to be useful to an AGI. What structure
of "understanding" would lend itself to useful AGI functionality? I don't
believe that any such structure can exist, and I am going to considerable
effort to "paradigm shift" my orthogonal understanding so as to address you
in language that you can process within your quite different paradigm.

> Perhaps you mean something else by it.


Pasting in my last paragraph in italics here, expanded enough (I hope) to
clarify my meaning sufficiently to argue the facts...
*What I fail to see is how fully "understanding" the written/spoken word is
of any use to any computer program!*

In short, it appears (to me) that the AGI folks here are committing the
greatest sin of all in system design: performing bottom-up design. Of course,
even a good top-down design must sometimes stop and evaluate the writability
of a difficult low-level module by actually writing it (I have certainly done
this many times), but when this effort goes beyond weeks, it is usually a
good sign that the STRUCTURE it fits into is wrong. Of course, we can't argue
your structure here because the AGI folks haven't done their most basic
homework of stating the high-level design that NEEDS the full understanding
that is being sought. Surely, if any one of you would attempt to propose a
design and maybe write just a little of the high-level code that would need
such a module, I believe that you would QUICKLY abandon this effort as NOT
being on a path to success toward your stated goals.

Following is where I (attempt to) drill down into just WHY I believe that
fully "understanding" a sentence won't help much, in the hope of saving you
the effort of designing the high-level code mentioned above. If I fail to
make my point below, then you would seem to have little rational choice other
than to STOP working on "understanding" for a week or two and throw together
some high-level trial design to see just what might be needed from the
understanding module.

*What would you then do with that understanding*

My point here is that models have structure, e.g. the figure-6 shape of
problematic cause-and-effect chains, where a linear chain of causes feeds
into a self-sustaining loop. Until you put the input into a useful structure,
as a compiler restates a program as digraphs, you can't even start to do
anything useful. However, random speech/writing comes from the heads of
people who do NOT understand such structure, and hence the writing itself
lacks such structure and often exhibits an erroneous structure that reflects
a misunderstanding of the very structure of reality itself. "Understanding"
seems to be an effort to structure nonsense (e.g. typical writing), somewhat
like asking a compiler to make good code from syntactically incorrect and
intractable source.
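
A minimal sketch of such a figure-6 structure as a plain adjacency-list
digraph, in Python; the node names and the loop-finding helper are invented
purely for illustration:

    # Hypothetical sketch: a "figure 6" cause-and-effect digraph -- a linear
    # tail of causes feeding into a self-sustaining loop, like the numeral 6.
    CAUSES = {
        "missed coffee":       ["caffeine withdrawal"],  # the tail of the 6
        "caffeine withdrawal": ["headache"],
        "headache":            ["poor sleep"],           # entering the loop
        "poor sleep":          ["stress"],
        "stress":              ["headache"],             # the loop closes
    }

    def find_loop(graph, start):
        """Walk the chain from `start` and report the first repeated node,
        i.e. where the tail of the 6 joins its loop."""
        seen, node = [], start
        while node not in seen:
            seen.append(node)
            node = graph[node][0]
        return seen, node  # the path walked, and the loop entry point

    path, loop_entry = find_loop(CAUSES, "missed coffee")
    print(" -> ".join(path), "| loop re-enters at:", loop_entry)

Once the input is in this form, a program can at least reason about where to
break the loop; free text from someone's head rarely arrives with this
structure made explicit.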

*since most of it will be beyond the ability of any computer program to do
anything useful and accurate with?*
For a computer program to handle such input, it would not only have to fully
understand the domain (usually not possible except in VERY mature domains),
but also understand the many common erroneous points of view of people
writing in that domain. Dr. Eliza handles this by dealing only with what it
DOES know about and ignoring the rest (what else can any program do?!), and
by looking for pre-programmed snippets of common statements of ignorance. I
see no way of getting this information into an AGI by reading text, because
the information would be needed before the AGI can process the input needed
to be able to process the input needed to be able to process the input...
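
As a rough sketch of that triage, in Python, with every pattern and phrase
invented for illustration (they are NOT from Dr. Eliza's actual knowledge
base):

    # Hypothetical sketch: act only on in-domain patterns, flag pre-programmed
    # snippets of common statements of ignorance, and ignore everything else.
    KNOWN_PATTERNS = {"headache", "morning coffee"}  # things it can act on
    IGNORANCE_SNIPPETS = {"toxins", "my chakras"}    # common misconceptions

    def triage(sentence: str) -> str:
        s = sentence.lower()
        if any(p in s for p in IGNORANCE_SNIPPETS):
            return "flag: pre-programmed statement of ignorance"
        if any(p in s for p in KNOWN_PATTERNS):
            return "act: in-domain pattern recognized"
        return "ignore"  # what else can any program do?!

    print(triage("I have a headache."))         # act
    print(triage("It must be the toxins."))     # flag
    print(triage("My car is making a noise."))  # ignore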


With luck we can wring things out at this level. With a little less luck, a
couple of weeks of attempted high-level design will lead you to these same
conclusions. With no luck at all, you will dismiss the need for high-level
design guiding the functionality of low-level modules and continue on your
present bottom-up path, and quite probably spend much of your life working
on this apparently impossible and useless module.

Steve Richfield
