Lukasz,
Thanks for the information about Word Grammar which, for anyone else 
interested, is described here.

You asked:
(4) I'm interested in how you handle backtracking: giving up on the
application of a construction when it leads to inconsistency.
Chart-based unification parsing can be optimized to share applications
of constructions which are "parallel", and this can be extended to
operators which are (like unification) monotonic, i.e. cannot make an
unsatisfiable/inconsistent state satisfiable/consistent. Merging
conjoins new facts to old ones, so it is monotonic in monotonic
logics. (Default/defeasible logics are nonmonotonic.)



My first solution to this problem is to postpone it by employing a controlled 
English, in which such constructions will be avoided where possible.  Secondly, 
Jerry Ball demonstrated his solution in Double R Grammar at the 2007 AAAI Fall 
Symposium on Cognitive Approaches to NLP.  His slide presentation is here, which 
I think fully addresses your issues.  To summarize Dr. Ball's ideas, which I 
will ultimately adopt for Texai:

* Serial processing [word-by-word parsing] with algorithmic backtracking has 
no hope of on-line, real-time processing in a large-coverage NLP system.
* The solution is serial processing without backtracking.
* If the current input is unexpected given the prior context, then accommodate 
the input by adjusting the representation [parse state] and coerce the input 
into the representation.

Dr. Ball gives as an example parsing the utterance "no airspeed or altitude 
restrictions".  Upon processing the word "or", the conjunction is accommodated 
via function overriding in his grammar, not by backing up.
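To make the idea concrete, here is a minimal Python sketch of accommodation 
without backtracking.  This is my own illustration, not Dr. Ball's actual 
code: the Node class, the single "or" rule, and the stack-based state are 
all simplifications invented for this example.

```python
# Hypothetical sketch: incremental, word-by-word processing in which an
# unexpected "or" is accommodated by coercing the current representation,
# rather than by backing up and re-parsing earlier words.

class Node:
    def __init__(self, kind, children=None):
        self.kind = kind                    # e.g. "Word:airspeed", "Coordination"
        self.children = children or []

    def __repr__(self):
        if not self.children:
            return self.kind
        return f"{self.kind}({', '.join(map(repr, self.children))})"

def parse(words):
    stack = []                              # the evolving parse state
    for word in words:
        if word == "or":
            # Unexpected conjunction: accommodate it by wrapping the most
            # recent constituent in a Coordination node, instead of
            # backtracking over earlier commitments.
            left = stack.pop()
            stack.append(Node("Coordination", [left]))
        else:
            node = Node(f"Word:{word}")
            if stack and stack[-1].kind == "Coordination" and len(stack[-1].children) == 1:
                stack[-1].children.append(node)   # fill the right conjunct
            else:
                stack.append(node)
        # each word is integrated exactly once; no state is ever undone
    return stack

state = parse("no airspeed or altitude restrictions".split())
```

On reaching "or", the word "airspeed" is coerced into the left conjunct of a 
Coordination node, and "altitude" later fills the right conjunct, so the 
parse never retreats.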

(4a) Does the fact that your parser is incremental mean that you do
 "early commitment" to constructions? (Double R Grammar seems to
 support early commitment when there is choice, but backtracking is
 still needed to get an interpretation when there are only ones without
 it.)

  
Yes, my parser makes the earliest possible commitment to a construction, but I 
allow subsequent elaboration of constructions as new constituents are 
recognized.  For example, in my use-case sentence "the book is on the table", I 
recognize an initial Situation Referring Expression construction covering the 
partial utterance "the book is", which is elaborated to form the final 
Situation Referring Expression when the remaining utterance "on the table" is 
processed.
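As a rough Python sketch of early commitment with later elaboration 
(illustrative only; the class names and the trivial "commit at 'is'" trigger 
are simplifications, not my engine's actual rules):

```python
# Hypothetical sketch: a construction object is created as soon as it is
# licensed, then monotonically extended as further constituents arrive;
# the committed object is elaborated, never discarded and rebuilt.

class Construction:
    def __init__(self, name):
        self.name = name
        self.constituents = []

    def elaborate(self, constituent):
        self.constituents.append(constituent)   # monotonic: only adds, never retracts
        return self

def parse_incrementally(words):
    sre = None                                  # the committed construction, once made
    for i, word in enumerate(words):
        if sre is None:
            if word == "is":
                # Earliest possible commitment: "the book is" licenses a
                # Situation Referring Expression construction.
                sre = Construction("SituationReferringExpression")
                sre.elaborate(" ".join(words[:i + 1]))
        else:
            sre.elaborate(word)                 # each later word elaborates it
    return sre

sre = parse_incrementally("the book is on the table".split())
```

The construction committed at "the book is" is the same object that ends up 
holding "on the table"; elaboration replaces backtracking.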

I regret that some aspects of my implementation are difficult to follow because 
I am using Jerry Ball's Double R Grammar, but not his ACT-R Lisp engine; 
instead I use my own incremental, cognitively plausible version of Luc 
Steels's Fluid Construction Grammar engine.  I combined these two systems 
because Jerry Ball's engine is not reversible, Luc Steels's grammar does not 
provide good coverage of English, and the otherwise excellent Fluid 
Construction Grammar engine is not incremental.

-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

----- Original Message ----
From: Lukasz Stafiniak <[EMAIL PROTECTED]>
To: [email protected]
Sent: Sunday, April 13, 2008 3:04:07 PM
Subject: Re: [agi] Between logical semantics and linguistic semantics

 On Wed, Apr 9, 2008 at 6:03 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> I would be interested
> in your comments on my adoption of Fluid Construction Grammar as a solution
> to the NL  to semantics mapping problem.
>
(1) Word Grammar (WG) is a construction-free version of your approach.
It is based solely on spreading activation. It doesn't have a sharp
separation of syntax and semantics: there's only one net. Nodes
representing subgraphs corresponding to constructions can be organized
into inheritance hierarchies (extensibility). But "pure WG" makes
things very awkward logics-wise, making it work would be a lot of
research (the WG book doesn't discuss utterance generation IIRC, but
reversing parsing-interpretation seems quite direct: select the most
activated word which doesn't have a left landmark, introduce a
word-instance node for it, include spreading its activation through a
right-landmark (ignoring the direction of the landmark) edge). Texai is
impure by its very nature; perhaps it could be made more of a WG*FCG mix
(beyond just sharing the spreading-activation idea).

(2) FCG is closer to traditional approaches à la "computational
linguistics" than WG.

(3) One could give up some FCG features to simplify it, for example by
assuming one-to-one correspondence between constructions and atomic
predicates.

(4) I'm interested in how you handle backtracking: giving up on the
application of a construction when it leads to inconsistency.
Chart-based unification parsing can be optimized to share applications
of constructions which are "parallel", and this can be extended to
operators which are (like unification) monotonic, i.e. cannot make an
unsatisfiable/inconsistent state satisfiable/consistent. Merging
conjoins new facts to old ones, so it is monotonic in monotonic
logics. (Default/defeasible logics are nonmonotonic.)

(4a) Does the fact that your parser is incremental mean that you do
"early commitment" to constructions? (Double R Grammar seems to
support early commitment when there is choice, but backtracking is
still needed to get an interpretation when there are only ones without
it.)

I will get to studying your sources when I have some time...

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com







      