PM,

At some risk of "dropping things into bins" that might not precisely fit
the bins...

On Fri, Mar 29, 2013 at 10:09 AM, Piaget Modeler
<[email protected]>wrote:

> Steve and Jim,
>
> Kindly respond...
>


> * Are their respective approaches sign-oriented
>   or agent-oriented?
>

For all applications except AGI (which can't work for decades with any
presently known approach, for lack of processor power) and automatic
language translation (which has a large interest in preserving the
speaker/writer's frame of mind), there seems to be little real-world
use for agent-oriented approaches.

I have a specific application in mind - problem solving, along with an
important subset thereof, selling products. This appears to require a
sign-oriented approach, because most solvable problems persist for one
of two reasons:

1.  The person lacks some important piece of information. This
information is NOT present in their speech/writings, but common subtle
signs often point to its absence. People say the same things
differently when they possess important knowledge/understanding, in
that they often/usually frame their problems in terms of the
controlling phenomena, provided of course that they know about those
phenomena. It is syntactically easy to pick out common expressions of
knowledge and/or ignorance of underlying phenomena, even though those
phenomena are never directly addressed. Even the original Eliza did
some of this in its own incredibly crude way.

2.  The person's point of view prohibits seeing the solution.
Carefully modeling their point of view doesn't help much. Usually
these misdirections can be recognized by classes, e.g. seeing
religious references mixed in when describing a medical condition.
These can often be addressed by asking unanswerable questions without
shifting POV, e.g. "Do you think God caused your illness, or do you
think God doesn't know about your illness, or do you think God chose
not to intervene and cure it?"
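Recognizing such misdirections by class can be as simple as noticing
vocabulary from one domain intruding into another. The word lists and
function below are hypothetical illustrations, not any real system's
lexicons:

```python
# Hypothetical vocabulary lists; a real system would need far richer
# lexicons and context checks.
RELIGIOUS_TERMS = {"god", "pray", "prayer", "blessed", "sin", "faith"}
MEDICAL_TERMS = {"illness", "symptom", "pain", "doctor", "medication"}

def mixed_domains(text: str) -> bool:
    """Flag postings that mix religious and medical vocabulary, a
    crude sign that the speaker's point of view may be blocking a
    solution."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    return bool(words & RELIGIOUS_TERMS) and bool(words & MEDICAL_TERMS)

print(mixed_domains("I pray every day but the pain never leaves."))  # True
```

Again, this is sign-oriented: it flags the class of misdirection
without attempting to model the point of view itself.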

So, my approach is fundamentally a sign-oriented approach, with the tweak
of looking for certain signs of actionable things going on in the heads of
agent/speakers.

>
> * What do they think about defining basic concepts
>   as types of the recognition and action procedures
>   of an agent?
>

Until my approach to parsing came along, purely bottom-up approaches (that
are good at recognizing variable word groupings as meaning particular
combinations of things) were way too slow to be interesting. Now, this may
be the fastest approach available.

I would think that an agent-oriented "top" could be attached to a
basic-concept-recognizing "bottom" to get the best of both worlds, if that
is what an application needs.

>
> * How about reusing these basic concepts as the
>   literal meanings of a language?
>

YES - that is the point I have been trying to make. It doesn't make any
difference whether something was intended as "placement" or "payload",
so handle them identically. What you lose in doing this is the
speaker/writer's defective point of view (POV) of his defective world
model. You can drag out important defects in his world model with
simplistic sign-oriented methods. However, while the POV information is
there in the words, it is MUCH more difficult to tease out, largely
because there seems to be no practical way to represent such things.

To illustrate, watch some old episodes of Cold Case Files, where they
question suspects looking for subtle indications that their world model
includes a particular crime, despite their attempting to hide this during
discussion. People have been convicted and executed for using particular
variants of words.

I remember what an attorney-friend once said to me: "Don't ever be
accused of anything, because YOU will be found guilty." My POVs on
just about everything are VERY different from other people's POVs,
hence ordinary conversation about almost anything will show variations
from the expected POVs.

There may indeed be some applications that need the defects in
people's POVs, e.g. analyzing the effectiveness of disinformation
campaigns, or serious attempts to render psychiatric help. However,
using that requirement to twist the analysis of NL for other,
ordinary applications would be the flea biting the tail that wags the
dog.

In summary, there has been a ubiquitous presumption that "understanding"
speech/writing is the same task regardless of the application, and all that
would be needed to tailor a system to various applications is to separate
the various types of information, and allow each application to just access
whatever it might be interested in. This might turn out to be true in
another few decades, but for now, analysis must be tailored to the intended
application. This "tailoring" might turn out to be something simple, like a
list of check-boxes for what information to extract so that common code
could work for all applications. However, some information is EXPENSIVE to
extract, so be careful which boxes you check.
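The check-box idea above might look something like the sketch below.
The option names and placeholder passes are hypothetical, and the
expensive POV extraction is deliberately left unimplemented:

```python
from dataclasses import dataclass

# A sketch of "check-box" tailoring: each application selects which
# kinds of information to extract. Field names are hypothetical.
@dataclass
class ExtractionOptions:
    literal_meaning: bool = True   # cheap: basic-concept recognition
    knowledge_signs: bool = True   # cheap: surface sign patterns
    speaker_pov: bool = False      # EXPENSIVE: point-of-view modeling

def analyze(text: str, opts: ExtractionOptions) -> dict:
    result = {}
    if opts.literal_meaning:
        result["literal"] = text.lower().split()  # placeholder pass
    if opts.knowledge_signs:
        result["signs"] = "i don't know" in text.lower()
    if opts.speaker_pov:
        raise NotImplementedError(
            "no practical representation for POV information yet")
    return result

print(analyze("I don't know why this happens.", ExtractionOptions()))
```

Common code serves every application, but the cost of each pass is
visible at the point where the box gets checked.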

DrEliza.com had to be rewritten because it used a database, which made
it too slow to scan large numbers of postings at a respectable speed.
My new method is the result of finding a better way. There seems to
have been a nearly universal failure to recognize the speed issues in
analyzing NL. The combinatorial explosion in extracting semantic
information is REAL.
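To illustrate the speed point, here is one standard trick (a sketch,
not the rewritten DrEliza method): bucket the sign patterns by their
first word in an in-memory hash index, so each word of a posting
triggers only a handful of candidate comparisons instead of a
database query per phrase. The patterns are hypothetical:

```python
from collections import defaultdict

# Hypothetical multi-word sign patterns.
PATTERNS = [
    "chronic fatigue",
    "chronic pain",
    "brain fog",
    "night sweats",
]

# Index patterns by first word; built once, held in memory.
index = defaultdict(list)
for p in PATTERNS:
    index[p.split()[0]].append(p)

def scan(text: str) -> list:
    """Find all pattern occurrences in a posting in one pass."""
    words = text.lower().split()
    hits = []
    for i, w in enumerate(words):
        for pat in index.get(w, ()):  # O(1) bucket lookup per word
            n = len(pat.split())
            if " ".join(words[i:i + n]) == pat:
                hits.append(pat)
    return hits

print(scan("she reports brain fog and chronic pain daily"))
```

With tens of thousands of patterns, the bucket lookup stays constant
per word, which is where the database version lost its speed.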

>
> Happy Easter to you!
>

Here, we celebrate Good Friday!!!

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424