I take it back, the field is still alive. Interesting.

http://xenia.media.mit.edu/~mueller/storyund/storyres.html

--Abram Demski

On Mon, Sep 29, 2008 at 9:51 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Mike,
>
> If your question is directed toward the general AI community (rather
> than the people on this list), the answer is a definite YES. The work
> was done some time ago, and as far as I know the line of research has
> been dropped, yet the results are to this day surprisingly good (I
> think). The following site has an example.
>
> http://www.it.uu.se/edu/course/homepage/ai/vt07/SCHANK.HTM
>
> The details of the story can vary fairly significantly and still the
> system performs as well as it does here (so long as it is still a
> story about traveling to get something to eat, written with the sorts
> of grammatical constructs you see in that story). Of course, this is a
> result of a fair amount of effort, programming "scripts" for everyday
> events. The approach was dropped because the amount of knowledge entry
> required made it impractical for reading, say, a random newspaper
> story. But that is just what Cyc is for.
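>
> To make the flavor of this concrete, here is a toy sketch (mine, purely
> illustrative -- the real SAM system used a much richer
> conceptual-dependency representation) of Schank-style script matching,
> in Python. A script is an ordered list of event frames; steps the story
> never mentions become default inferences:
>
>     # Toy Schank-style script application (illustrative only).
>     RESTAURANT_SCRIPT = [
>         ("go", {"agent": None, "destination": "restaurant"}),
>         ("order", {"agent": None, "food": None}),
>         ("eat", {"agent": None, "food": None}),
>         ("pay", {"agent": None}),
>     ]
>
>     def apply_script(script, story_events):
>         """Bind story events to script steps in order; unmatched steps
>         become default inferences (the 'reading between the lines')."""
>         bindings, matched, step = {}, [], 0
>         for verb, args in story_events:
>             # Skip ahead through script steps the story left implicit.
>             while step < len(script) and script[step][0] != verb:
>                 matched.append((script[step][0], "inferred"))
>                 step += 1
>             if step < len(script):
>                 bindings.update({k: v for k, v in args.items() if v})
>                 matched.append((verb, "stated"))
>                 step += 1
>         matched += [(v, "inferred") for v, _ in script[step:]]
>         return matched, bindings
>
>     story = [("go", {"agent": "John", "destination": "restaurant"}),
>              ("eat", {"agent": "John", "food": "lobster"})]
>     print(apply_script(RESTAURANT_SCRIPT, story))
>     # 'order' and 'pay' are inferred even though the story never
>     # mentions them, so the system can answer "Did John pay?"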
>
> Anyway, the point is, understanding passages is not a new field, just
> a neglected one.
>
> --Abram
>
> On Mon, Sep 29, 2008 at 3:23 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>> Ben and Stephen,
>>
>> AFAIK your focus - and the universal focus - in this debate on how and
>> whether language can be symbolically/logically interpreted - is on
>> *individual words and sentences.*  A natural place to start. But you can't
>> stop there, because the problems, I suggest (hard as they already are),
>> only seriously begin when you try to interpret *passages* - series of
>> sentences from texts - and connect one sentence with another. Take:
>>
>> "John sat down in the carriage. His grim reflection stared at him through
>> the window. A whistle blew. The train started shuddering into motion, and
>> slowly gathered pace. He was putting Brighton behind him for good. And just
>> then the conductor popped his head through the door."
>>
>> I imagine you can pose the interpretative questions yourself. How do you
>> connect any one sentence with any other here? Where is the whistle blowing?
>> Where is the train moving - inside the carriage or outside? Is the
>> carriage inside or outside the moving train, or where in relation to it?
>> Was he putting Brighton *physically* behind him, like a cushion? Did the
>> conductor break his head? Etc., etc.
>>
>> The point is - in reading passages, in order to connect up sentences, you
>> have to do a massive amount of *reading between the lines*.  In doing that,
>> you have to reconstruct the world, or the parts of it being referred to,
>> from your brain's own models of that world. (To understand the above
>> passage, for example, you employ a very complex model of train travel.)
>>
>> And this will apply to all kinds of passages - to arguments as well as
>> stories.  (Try understanding Ben's argument below).
>>
>> How does Stephen or YKY or anyone else propose to "read between the lines"?
>> And what are the basic "world models", "scripts", "frames", etc., that
>> you think are sufficient for understanding any set of texts, even a
>> relatively specialised set?
>>
>> (Has anyone seriously *tried* understanding passages?)
>>
>>
>> Stephen,
>>
>> Yes, I think your spreading-activation approach makes sense and has plenty
>> of potential.
>>
>> Our approach in OpenCog is actually pretty similar, given that our
>> importance-updating dynamics can be viewed as a nonstandard sort of
>> spreading activation...
>>
>> I think this kind of approach can work, but I also think that getting it to
>> work generally and robustly -- not just in toy examples like the one I gave
>> -- is going to require a lot of experimentation and trickery.
>>
>> Of course, if the AI system has embodied experience, this provides extra
>> links for the spreading activation (or analogues) to flow along, thus
>> increasing the odds of meaningful results...
>>
>> Also, I think that spreading-activation type methods can only handle some
>> cases, and that for other cases one needs to use explicit inference to do
>> the disambiguation.
>>
>> My point for YKY was (as you know) not that this is an impossible problem
>> but that it's a fairly deep AI problem which is not provided out-of-the-box
>> in any existing NLP toolkit.  Solving disambiguation thoroughly is AGI-hard
>> ... solving it usefully is not ... but solving it usefully for
>> *prepositions* is cutting-edge research going beyond what existing NLP
>> frameworks do...
>>
>> -- Ben G
>>
>> On Mon, Sep 29, 2008 at 1:25 PM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>>>
>>> Ben gave the following examples that demonstrate the ambiguity of the
>>> preposition "with":
>>>
>>> People eat food with forks
>>>
>>> People eat food with friends
>>>
>>> People eat food with ketchup
>>>
>>> The Texai bootstrap English dialog system, whose grammar rule engine I'm
>>> currently rewriting, uses elaboration and spreading activation to perform
>>> disambiguation and pruning of alternative interpretations.  Let's step
>>> through how Texai would process Ben's examples.  According to Wiktionary,
>>> "with" has among its word senses the following:
>>>
>>> as an instrument; by means of
>>>
>>> in the company of; alongside; along side of; close to; near to
>>>
>>> in addition to, as an accessory to
>>>
>>> It's clear, when I make these substitutions, which word sense is to be
>>> selected:
>>>
>>> People eat food by means of forks
>>>
>>> People eat food in the company of friends
>>>
>>> People eat ketchup as an accessory to food
>>>
>>> Elaboration of the Texai discourse context provides additional entailed
>>> propositions about the objects actually referenced in the utterance.
>>> The elaboration process is performed efficiently by spreading activation
>>> over the KB, starting from the focal terms in context.  The links
>>> explored by this process can be formed by offline deductive inference,
>>> learned by heuristic search and reinforcement learning, or simply
>>> taught by a mentor.
>>>
>>> Relevant elaborations I would expect Texai to make for the example
>>> utterances are:
>>>
>>> a fork is an instrument
>>>
>>> there are activities that a person performs as a member of a group of
>>> friends; to eat is such an activity
>>>
>>> ketchup is a condiment; a condiment is an accessory with regard to food
>>>
>>> Texai considers all interpretations simultaneously, in a transient
>>> spreading activation network whose nodes are the semantic propositions
>>> contained within the elaborated discourse context and whose links are formed
>>> when propositions share an argument concept.  Negative links are formed
>>> between propositions belonging to rival interpretations.  At AGI-09 I
>>> hope to demonstrate this technique, in which the correct word sense of
>>> "with" is determined from the most highly activated nodes in the
>>> elaborated discourse context after spreading activation has quiesced.
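>>>
>>> A toy sketch of the idea (illustrative Python only, not the actual
>>> Texai code; the node weighting, link formation, and quiescence test in
>>> the real system are more elaborate):
>>>
>>>     import itertools
>>>
>>>     # Nodes are candidate propositions (name, argument-concepts).
>>>     # Positive links join propositions sharing argument concepts;
>>>     # negative links join rival interpretations of the same word.
>>>     def build_links(props, rivals):
>>>         links = {}
>>>         for a, b in itertools.combinations(props, 2):
>>>             if (a, b) in rivals or (b, a) in rivals:
>>>                 links[(a, b)] = -1.0
>>>             else:
>>>                 shared = set(a[1]) & set(b[1])
>>>                 if shared:
>>>                     links[(a, b)] = float(len(shared))
>>>         return links
>>>
>>>     def spread(props, links, seeds, steps=20, decay=0.9):
>>>         """Iterate activation flow until (approximately) quiesced."""
>>>         act = {p: (1.0 if p in seeds else 0.0) for p in props}
>>>         for _ in range(steps):
>>>             new = {}
>>>             for p in props:
>>>                 inflow = sum(w * act[q] for (a, b), w in links.items()
>>>                              for p2, q in ((a, b), (b, a)) if p2 == p)
>>>                 new[p] = max(0.0, decay * act[p] + 0.1 * inflow)
>>>             act = new
>>>         return act
>>>
>>>     # "People eat food with forks": two rival senses of "with", plus
>>>     # the elaboration "a fork is an instrument" seeding activation.
>>>     p_means = ("with-as-instrument", ("eat", "fork", "instrument"))
>>>     p_company = ("with-in-company-of", ("eat", "fork"))
>>>     elab = ("fork-is-an-instrument", ("fork", "instrument"))
>>>     props = [p_means, p_company, elab]
>>>     links = build_links(props, {(p_means, p_company)})
>>>     act = spread(props, links, seeds={elab})
>>>     print(max((p_means, p_company), key=act.get))  # instrument sense wins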
>>>
>>> -Steve
>>>
>>> Stephen L. Reed
>>> Artificial Intelligence Researcher
>>> http://texai.org/blog
>>> http://texai.org
>>> 3008 Oak Crest Ave.
>>> Austin, Texas, USA 78704
>>> 512.791.7860
>>>
>>> ----- Original Message ----
>>> From: Ben Goertzel <[EMAIL PROTECTED]>
>>> To: [email protected]
>>> Sent: Monday, September 29, 2008 8:18:30 AM
>>> Subject: Re: [agi] universal logical form for natural language
>>>
>>>
>>>
>>> On Mon, Sep 29, 2008 at 4:23 AM, YKY (Yan King Yin)
>>> <[EMAIL PROTECTED]> wrote:
>>>>
>>>> On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski <[EMAIL PROTECTED]>
>>>> wrote:
>>>> >
>>>> > How much will you focus on natural language? It sounds like you want
>>>> > that to be fairly minimal at first. My opinion is that chatbot-type
>>>> > programs are not such a bad place to start-- if only because it is
>>>> > good publicity.
>>>>
>>>> I plan to make use of Stephen Reed's Texai -- he's writing a dialog
>>>> system that can translate NL to logical form.  If it turns out to be
>>>> infeasible, I can borrow a simple NL interface from somewhere else.
>>>
>>>
>>> Whether using an NL interface like Stephen's is feasible or not really
>>> depends on your expectations for it.
>>>
>>> Parsing English sentences into sets of formal-logic relationships is not
>>> extremely hard given current technology.
>>>
>>> But the only feasible way to do it, without making AGI breakthroughs
>>> first, is to accept that these formal-logic relationships will then embody
>>> significant ambiguity.
>>>
>>> Pasting some text from a PPT I've given...
>>>
>>> ***
>>> Syntax parsing, using the NM/OpenCog narrow-AI RelEx system, transforms
>>>
>>> Guard my treasure with your life
>>>
>>> into
>>>
>>> _poss(life,your)
>>> _poss(treasure,my)
>>> _obj(Guard,treasure)
>>> with(Guard,life)
>>> _imperative(Guard)
>>>
>>> Semantic normalization, using the RelEx rule engine and the FrameNet
>>> database, transforms this into
>>>
>>> Protection:Protection(Guard, you)
>>> Protection:Asset(Guard, treasure)
>>> Possession:Owner(treasure, me)
>>> Protection:Means(Guard, life)
>>> Possession:Owner(life,you)
>>> _imperative(Guard)
>>>
>>> But we also get
>>>
>>> Guard my treasure with your sword.
>>>
>>> Protection:Protection(Guard, you)
>>> Protection:Asset(Guard, treasure)
>>> Possession:Owner(treasure, me)
>>> Protection:Means(Guard, sword)
>>> Possession:Owner(sword,you)
>>> _imperative(Guard)
>>>
>>> Guard my treasure with your uncle.
>>>
>>> Protection:Protection(Guard, you)
>>> Protection:Protection(Guard, uncle)
>>> Protection:Asset(Guard, treasure)
>>> Possession:Owner(treasure, me)
>>> Protection:Means(Guard, uncle)
>>> Possession:Owner(uncle,you)
>>> _imperative(Guard)
>>>
>>> *****
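>>>
>>> To show the flavor of that rule-engine step, here is a toy sketch
>>> (illustrative Python only -- the actual RelEx rule engine and the
>>> FrameNet database are far richer):
>>>
>>>     # Rewrite RelEx-style dependency relations into FrameNet-style
>>>     # frame elements using hand-written rules keyed on the relation.
>>>     RULES = {
>>>         "_obj": lambda v, x: [("Protection:Asset", v, x)],
>>>         "with": lambda v, x: [("Protection:Means", v, x)],
>>>         "_poss": lambda x, owner: [("Possession:Owner", x,
>>>                                     {"my": "me", "your": "you"}[owner])],
>>>     }
>>>
>>>     def normalize(relations):
>>>         frames = []
>>>         for rel, a, b in relations:
>>>             # Unknown relations pass through unchanged.
>>>             rule = RULES.get(rel, lambda *args: [(rel,) + args])
>>>             frames.extend(rule(a, b))
>>>         return frames
>>>
>>>     parse = [("_poss", "life", "your"),
>>>              ("_poss", "treasure", "my"),
>>>              ("_obj", "Guard", "treasure"),
>>>              ("with", "Guard", "life")]
>>>     for frame in normalize(parse):
>>>         print(frame)
>>>     # The rule for "with" blindly emits Protection:Means, which is
>>>     # precisely the ambiguity at issue: life, sword, and uncle all
>>>     # come out as Means.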
>>>
>>> The different senses of the word "with" are not currently captured by
>>> the RelEx NLP system, and that's a hard problem for current
>>> computational linguistics technology to grapple with.
>>>
>>> I think it can be handled via embodiment, i.e. via having an AI system
>>> observe the usage of various senses of "with" in various embodied
>>> contexts.
>>>
>>> Potentially it could also be handled via statistical-linguistics methods
>>> (where the contexts are then various documents the senses of "with" have
>>> occurred in, rather than embodied situations), though I'm more skeptical
>>> of this method.
>>>
>>> In a knowledge entry context, this means that current best-of-breed NL
>>> interpretation systems will parse
>>>
>>> People eat food with forks
>>>
>>> People eat food with friends
>>>
>>> People eat food with ketchup
>>>
>>> into similarly-structured logical relationships.
>>>
>>> This is just fine, but what it tells you is that **reformulating English
>>> into logical formalism does not, in itself, solve the disambiguation
>>> problem**.
>>>
>>> The disambiguation problem remains, just on the level of disambiguating
>>> formal-logic structures into less ambiguous ones.
>>>
>>> Using a formal language like CycL to enter knowledge is one way of largely
>>> circumventing this problem ... using Lojban would be another ...
>>>
>>> (Again I stress that having humans encode knowledge is NOT my favored
>>> approach to AGI, but I'm just commenting on some of the issues involved
>>> anyway...)
>>>
>>> -- Ben G
>>>
>>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> Director of Research, SIAI
>> [EMAIL PROTECTED]
>>
>> "Nothing will ever be attempted if all possible objections must be first
>> overcome "  - Dr Samuel Johnson
>>
>>
>

