Re: [agi] Parsing theories

2007-05-23 Thread Lukasz Stafiniak

On 5/23/07, Mark Waser <[EMAIL PROTECTED]> wrote:

systems in that there has been success in processing huge amounts (corpuses,
corpi? :-) of data and producing results -- but it's *clearly* not the way


corpora



Re: [agi] Parsing theories

2007-05-23 Thread Mark Waser
> As I think about it, one problem is, depending on how it's
> parametrized, it's not going to build much of a world model.
> Say for example it uses trigrams. The average high school grad knows
> something like 50,000 words. So there are something like 10^14
> trigrams. It will never see enough data to build a model capturing
> much semantics, unless it builds an incredibly compact model,
> in which case -- what is the underlying structure and how
> (computationally) are you going to learn it?

Absolutely correct.  That's why I said "My belief is that if you had the proper 
structure-building learning algorithms that your operator grammar system would 
simply (re-)discover the basic parts of speech and would then successfully 
proceed from there."  and why I slammed it for "'reinventing the wheel' in terms 
of its unnecessary generalization of dependency".

> In unsupervised learning, you can learn a lot,
> say you can cluster the world into two clusters. But until you get 
> supervision, you can't learn the final few bits to distinguish good
> from bad, or whatever.

I'm afraid that I disagree completely with the latter sentence.

> Operator grammar might be very useful for
> getting a structure that could then be rapidly trained to produce
> meaning, but I don't think you can finish the job until you interact
> with sensation.

It seems as if you're now talking sensory fusion (which is a whole 'nother can 
o' worms).

Mark


Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum


>> Also, I don't see how you can call a model "semantic" when it makes
>> no reference to the world.

Mark> Ah, but this is where it gets tricky.  While the model makes no
Mark> reference to the world, it is certainly influenced by the fact
Mark> that 100% of its data comes from the world -- which then forces
Mark> the model to build itself based upon the world
Mark> (i.e. effectively, it is building a world model) -- and I would
Mark> certainly call that semantics.

As I think about it, one problem is, depending on how it's
parametrized, it's not going to build much of a world model.
Say for example it uses trigrams. The average high school grad knows
something like 50,000 words. So there are something like 10^14
trigrams. It will never see enough data to build a model capturing
much semantics, unless it builds an incredibly compact model,
in which case -- what is the underlying structure and how
(computationally) are you going to learn it?
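
A quick back-of-envelope in Python, to make the orders of magnitude concrete
(the 50,000-word vocabulary is just the ballpark figure above, and the 4 bytes
per count is an arbitrary assumption):

vocab = 50_000
trigrams = vocab ** 3                      # every ordered 3-word combination
bytes_per_entry = 4                        # assume one 32-bit count per cell
print(f"possible trigrams: {trigrams:.2e}")                    # ~1.25e14
print(f"raw count table:   {trigrams * bytes_per_entry / 1e12:.0f} TB")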

>> natural or highly unlikely, but unless I misunderstand something,
>> there is no possibility it could tell me whether a sentence
>> describes a scene.

Mark> Do you mean that it couldn't perform sensory fusion or that it
Mark> can't recognize "meaning"?  I would agree with the former but
Mark> (as an opinion -- because I can't definitively prove it)
Mark> disagree with the latter.

If adequately trained (a big if) it could perhaps distinguish a meaningful
sentence from an unlikely one. The situation might be analogous to 
unsupervised learning. In unsupervised learning, you can learn a lot,
say you can cluster the world into two clusters. But until you get 
supervision, you can't learn the final few bits to distinguish good
from bad, or whatever. Operator grammar might be very useful for
getting a structure that could then be rapidly trained to produce
meaning, but I don't think you can finish the job until you interact
with sensation.
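
Roughly what I have in mind, as a toy sketch (synthetic two-cluster data and
scikit-learn k-means, nothing to do with language per se):

import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs stand in for "good" and "bad" examples.
rng = np.random.default_rng(0)
good = rng.normal(loc=+2.0, scale=0.5, size=(100, 2))
bad = rng.normal(loc=-2.0, scale=0.5, size=(100, 2))
X = np.vstack([good, bad])

# Unsupervised step: the clustering recovers the structure...
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# ...but carries no labels: cluster 0 could be either class.  One labelled
# example per cluster -- the "final few bits" of supervision -- resolves it.
label_of_cluster = {clusters[0]: "good", clusters[100]: "bad"}
print([label_of_cluster.get(c, "?") for c in clusters[:3]],
      [label_of_cluster.get(c, "?") for c in clusters[-3:]])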

Mark> Mark





Re: [agi] Parsing theories

2007-05-23 Thread Benjamin Goertzel




Also, I don't see how you can call a model "semantic" when it makes
no reference to the world. The model as described by Wikipedia
could have the capability of telling me whether a sentence is
natural or highly unlikely, but unless I misunderstand something,
there is no possibility it could tell me whether a sentence
describes a scene.




That is really a philosophical point: it seems to be a special case of
linguistic structuralism

http://en.wikipedia.org/wiki/Structuralism

in the spirit of Saussure.  In this approach one studies language as a system
of interrelating signs ... e.g. "large" is defined in terms of its relationship
to "small" and "huge" rather than in terms of its relationship to the physical
world.

So yeah: you can't tell from linguistic structure alone if a sentence describes
a real scene or an imaginary scene. But you might be able to tell if it defines
a scene or not by looking at the collection of linguistic relationships
generally needed to define a scene...

I tend to think that structuralist linguistics points out some important
aspects that are commonly overlooked in other linguistic paradigms, but also
somewhat overstates things...

Arguably Saussure was the grand-daddy of corpus linguistics...

-- Ben G


Re: [agi] Parsing theories

2007-05-23 Thread Mark Waser
I'll take a shot at answering some of your questions as someone who has done 
some work and research but is certainly not claiming to be an expert . . . .



Wikipedia says that various quantities are "learnable" because they can
in principle be determined by data. What is known about whether they
are efficiently learnable, e.g. (a) whether a child would acquire enough
data to learn the language and (b) whether given the data, learning
the language would be computationally feasible? (e.g. polynomial
time.)


Operator grammar in many respects reminds me of conceptual classification 
systems in that there has been success in processing huge amounts (corpuses, 
corpi? :-) of data and producing results -- but it's *clearly* not the way 
in which humans (i.e. human children) do it.


My belief is that if you had the proper structure-building learning 
algorithms that your operator grammar system would simply (re-)discover the 
basic parts of speech and would then successfully proceed from there.  I 
suspect that doing so is probably even computationally feasible 
(particularly if you accidentally bias it -- which would be *really* tough 
to avoid).
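
To be concrete about what I mean by (re-)discovering the parts of speech, here
is a crude sketch (toy corpus and scikit-learn k-means, not code from my actual
project): cluster words by the contexts they occur in, and classes resembling
determiners, nouns and verbs tend to fall out.

from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

corpus = ("the cat sat . the dog ran . a cat ran . a dog sat . "
          "the man ate . a man ran .").split()

vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
# Each word's feature vector counts its left and right neighbours.
contexts = defaultdict(lambda: np.zeros(2 * len(vocab)))
for i, w in enumerate(corpus):
    if i > 0:
        contexts[w][index[corpus[i - 1]]] += 1
    if i < len(corpus) - 1:
        contexts[w][len(vocab) + index[corpus[i + 1]]] += 1

words = sorted(contexts)
X = np.array([contexts[w] for w in words])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for k in range(4):
    print(k, [w for w, lab in zip(words, labels) if lab == k])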


All human languages fundamentally have the same basic parts of speech.  I 
believe that operator grammar is "reinventing the wheel" in terms of its 
unnecessary generalization of dependency.



Is there empirical work with this model?


It depends upon what you mean.  My current project does "posit an 
underlying structure" -- namely, the basic parts of speech.  Does it count -- or 
would I need to (IMO foolishly ;-) discard that for it to count?



Also, I don't see how you can call a model "semantic" when it makes
no reference to the world.


Ah, but this is where it gets tricky.  While the model makes no reference to 
the world, it is certainly influenced by the fact that 100% of its data 
comes from the world -- which then forces the model to build itself based 
upon the world (i.e. effectively, it is building a world model) -- and I 
would certainly call that semantics.



natural or highly unlikely, but unless I misunderstand something,
there is no possibility it could tell me whether a sentence
describes a scene.


Do you mean that it couldn't perform sensory fusion or that it can't 
recognize "meaning"?  I would agree with the former but (as an opinion --  
because I can't definitively prove it) disagree with the latter.


   Mark


- Original Message - 
From: "Eric Baum" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, May 23, 2007 9:36 AM
Subject: Re: [agi] Parsing theories




This is based purely on reading the wikipedia entry on Operator
grammar, which I find very interesting. I'm hoping someone out there
knows enough about this to answer some questions :^)

Wikipedia says that various quantities are "learnable" because they can
in principle be determined by data. What is known about whether they
are efficiently learnable, e.g. (a) whether a child would acquire enough
data to learn the language and (b) whether given the data, learning
the language would be computationally feasible? (e.g. polynomial
time.)

Keep in mind that you have to learn the language well enough to
deal with the fact that you can generate and understand (and thus
pretty much have to be able to calculate the likelihood of) a
virtually infinite number of sentences never before seen.

I presume the answer to these two questions (how much data you need
and how easy it is to learn from it) will depend on how you
parametrize the various knowledge you learn. So, for example,
take a word that takes two arguments. One way to parametrize
the likelihood of various arguments would be with a table over
all two word combinations, the i,j entry gives the likelihood
that the ith word and the jth word are the two arguments.
But most likely, in reality, the likelihood of the jth word
will be much pinned down conditional on the ith. So one might
imagine parametrizing these "learned" coherent selection tables
in some powerful way that exposes underlying structure.
If you just use lookup tables, I'm guessing learning is
computationally trivial, but data requirements are prohibitive.
On the other hand, if you posit underlying structure, you can no
doubt lower the amount of data required to be able to deal with
novel sentences, but I would expect you'd run into the standard
problems that finding the optimal structure becomes NP-hard.
At this point, a heuristic might or might not suffice, it would
be an empirical question.

Is there empirical work with this model?

Also, I don't see how you can call a model "semantic" when it makes
no reference to the world. The model as described by Wikipedia
could have the capability of telling me whether a sentence is
natural or highly unlikely, but unless I misunderstand something,
there is no possibility it could tell me whether a sentence describes a scene.

Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum

A google search on "operator grammar" + trigram
yields nada.

A google search on "operator grammar" + bigram yields nothing
interesting.

I've seen papers on statistical language parsing before,
including trigrams etc. Not so clear to me the extent to which
they've been merged with Harris's work.


Jean-Paul> Check "bigrams" (or, more interestingly, "trigrams") in
Jean-Paul> computational linguistics.
 
 
Jean-Paul> Department of Information Systems Email:
Jean-Paul> [EMAIL PROTECTED] Phone: (+27)-(0)21-6504256
Jean-Paul> Fax: (+27)-(0)21-6502280 Office: Leslie Commerce 4.21


>>> Eric Baum <[EMAIL PROTECTED]> 2007/05/23 15:36:20 >>>

Jean-Paul> One way to parametrize the likelihood of various arguments
Jean-Paul> would be with a table over all two word combinations, the
Jean-Paul> i,j entry gives the likelihood that the ith word and the
Jean-Paul> jth word are the two arguments.  But most likely, in
Jean-Paul> reality, the likelihood of the jth word will be much pinned
Jean-Paul> down conditional on the ith.

Jean-Paul> Is there empirical work with this model?



Re: [agi] Parsing theories

2007-05-23 Thread Jean-Paul Van Belle
Check "bigrams" (or, more interestingly, "trigrams") in computational
linguistics.
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


>>> Eric Baum <[EMAIL PROTECTED]> 2007/05/23 15:36:20 >>>

One way to parametrize 
the likelihood of various arguments would be with a table over
all two word combinations, the i,j entry gives the likelihood
that the ith word and the jth word are the two arguments.
But most likely, in reality, the likelihood of the jth word
will be much pinned down conditional on the ith. 

Is there empirical work with this model?


Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum

This is based purely on reading the wikipedia entry on Operator
grammar, which I find very interesting. I'm hoping someone out there
knows enough about this to answer some questions :^) 

Wikipedia says that various quantities are "learnable" because they can
in principle be determined by data. What is known about whether they
are efficiently learnable, e.g. (a) whether a child would acquire enough
data to learn the language and (b) whether given the data, learning
the language would be computationally feasible? (e.g. polynomial
time.)

Keep in mind that you have to learn the language well enough to 
deal with the fact that you can generate and understand (and thus
pretty much have to be able to calculate the likelihood of) a
virtually infinite number of sentences never before seen.

I presume the answer to these two questions (how much data you need
and how easy it is to learn from it) will depend on how you
parametrize the various knowledge you learn. So, for example,
take a word that takes two arguments. One way to parametrize 
the likelihood of various arguments would be with a table over
all two word combinations, the i,j entry gives the likelihood
that the ith word and the jth word are the two arguments.
But most likely, in reality, the likelihood of the jth word
will be much pinned down conditional on the ith. So one might
imagine parametrizing these "learned" coherent selection tables
in some powerful way that exposes underlying structure.
If you just use lookup tables, I'm guessing learning is
computationally trivial, but data requirements are prohibitive.
On the other hand, if you posit underlying structure, you can no
doubt lower the amount of data required to be able to deal with
novel sentences, but I would expect you'd run into the standard
problems that finding the optimal structure becomes NP-hard.
At this point, a heuristic might or might not suffice, it would
be an empirical question.
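
To make the lookup-table option concrete, a toy sketch in Python (the operator
"eat" and the counts are invented, not learned from any corpus):

from collections import defaultdict

# For an operator taking two arguments, keep a table whose (i, j) entry
# estimates how likely it is that word i and word j fill the argument slots.
pair_counts = defaultdict(lambda: defaultdict(int))

def observe(operator, arg1, arg2):
    pair_counts[operator][(arg1, arg2)] += 1

def likelihood(operator, arg1, arg2):
    table = pair_counts[operator]
    total = sum(table.values())
    return table[(arg1, arg2)] / total if total else 0.0

observe("eat", "man", "cheese")
observe("eat", "man", "pizza")
observe("eat", "cat", "mouse")
print(likelihood("eat", "man", "pizza"))   # 1/3

# Learning is trivial (just counting), but the table needs one cell per word
# pair: with a 50,000-word vocabulary that is 2.5e9 cells per operator.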

Is there empirical work with this model?

Also, I don't see how you can call a model "semantic" when it makes
no reference to the world. The model as described by Wikipedia
could have the capability of telling me whether a sentence is
natural or highly unlikely, but unless I misunderstand something,
there is no possibility it could tell me whether a sentence
describes a scene.

Matt> --- Chuck Esterbrook <[EMAIL PROTECTED]> wrote:

>> Any opinions on Operator Grammar vs. Link Grammar?
>> 
>> http://en.wikipedia.org/wiki/Operator_Grammar
>> 
>> http://en.wikipedia.org/wiki/Link_grammar
>> 
>> Link Grammar seems to have spawned practical software, but Operator
>> Grammar has some compelling ideas including coherent selection,
>> information content and more. Maybe these ideas are too hard or too
>> ill-defined to implement?
>> 
>> Or, in other words, why does Link Grammar win the GoogleFight?
>> 
Matt> 
http://www.googlefight.com/index.php?lang=en_GB&word1=%22link+grammar%22&word2=%22operator+grammar%22
>> (http://tinyurl.com/yvu9xr)

Matt> Link grammar has a website and online demo at
Matt> http://www.link.cs.cmu.edu/link/submit-sentence-4.html

Matt> But as I posted earlier, it gives the same parse for:

Matt> - I ate pizza with pepperoni.  - I ate pizza with a friend.  - I
Matt> ate pizza with a fork.

Matt> which shows that you can't separate syntax and semantics.  Many
Matt> grammars have this problem.

Matt> Operator grammar seems to me to be a lot closer to the way
Matt> natural language actually works.  It includes semantics.  The
Matt> basic constraints (dependency, likelihood, and reduction) are
Matt> all learnable.  It might have gotten less attention because its
Matt> main proponent, Zellig Harris, died in 1992, just before it
Matt> became feasible to test the grammar in computational models
Matt> (e.g.  perplexity or text compression).  Also, none of his
Matt> publications are online, but you can find reviews of his books
Matt> at http://www.dmi.columbia.edu/zellig/


Matt> -- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] Parsing theories

2007-05-22 Thread Lukasz Stafiniak

On 5/22/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:


Link grammar has a website and online demo at
http://www.link.cs.cmu.edu/link/submit-sentence-4.html

But as I posted earlier, it gives the same parse for:

- I ate pizza with pepperoni.
- I ate pizza with a friend.
- I ate pizza with a fork.

which shows that you can't separate syntax and semantics.  Many grammars have
this problem.


Link grammar (similarly to CFG and most other approaches that don't
already do it) can be extended with feature structures to be unified
online in the run of the parser (leading to so-called unification
grammars). Nothing stops you from putting semantic information into
these structures as long as it is monotonic.
(Of course you cannot machine-learn these structures out of pure air.)
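
For concreteness, a minimal sketch of what monotonic unification of feature
structures looks like (the feature names such as "number" and "sem" are
hypothetical, not taken from the link parser):

def unify(fs1, fs2):
    """Return the union of two feature structures, or None on a clash."""
    result = dict(fs1)
    for key, value in fs2.items():
        if key in result:
            if isinstance(result[key], dict) and isinstance(value, dict):
                sub = unify(result[key], value)
                if sub is None:
                    return None            # clash somewhere deeper
                result[key] = sub
            elif result[key] != value:
                return None                # feature clash: unification fails
        else:
            result[key] = value            # monotonic: information is only added
    return result

noun = {"cat": "N", "number": "sg", "sem": {"animate": True}}
verb_subject_slot = {"cat": "N", "number": "sg"}
print(unify(noun, verb_subject_slot))      # succeeds, semantic info carried along
print(unify(noun, {"number": "pl"}))       # None: number clash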



Re: [agi] Parsing theories

2007-05-22 Thread J Storrs Hall, PhD
Unfortunately, no -- I knew Jones when he was at Bell Labs, so a lot of what I 
know about APNs isn't from published papers. 

Now he's moved on and BL is not what it used to be, and I have no idea what 
ever happened to all the work that got done there on APNs. Lucent sure isn't 
doing it, and AT&T (now Shannon) Labs trashed their AI section a few years 
back.

Josh


On Tuesday 22 May 2007 02:39:59 pm Chuck Esterbrook wrote:
> On 5/22/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> > I'm not doing any active work on it at the moment, but my favorite 
approach
> > has been Mark Jones' active production networks, which are one of those
> > schemes that lie in the twilight between symbolic and connectionist. Like
> > Copycat, it is based on a semantic net with spreading activation and 
variable
> > connection strengths. The network looks like the tree of a grammar, with 
lots
> > of extra links, and the text is fed in by sequentially "lighting up" the
> > terminal nodes that correspond to words. After each one, the network
> > reconfigures itself to interpret the next word/phrase appropriately.
> >
> > There is no formal distinction between nodes holding syntactic and 
semantic
> > information. Indeed, if you "light up" nodes corresponding to a semantic
> > situation, the network can be jogged to produce sentences describing it.
> 
> Sounds interesting. I found some papers on it, but couldn't locate a
> home page for Jones or the software. Do you have any good URLs to
> share that Google isn't coughing up?
> 
> -Chuck
> 




Re: [agi] Parsing theories

2007-05-22 Thread Mark Waser

Any opinions on Operator Grammar vs. Link Grammar?


A Link Grammar parser is relatively easy to implement and has low system 
requirements.  Link Grammar uses (and depends upon) the phenomenon of 
planarity to (reasonably effectively) identify the parts of speech in 
English grammar.  As such, it primarily deals with syntax (simpler) rather 
than semantics (much tougher).
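
For what it's worth, the planarity constraint itself is easy to state in code
(the word positions and links below are made up for illustration, not output
of the actual Link Grammar parser):

def links_are_planar(links):
    """Links drawn above the sentence must not cross: (a, b) and (c, d) cross
    iff one endpoint of each lies strictly inside the other link's span."""
    links = [tuple(sorted(link)) for link in links]
    for i, (a, b) in enumerate(links):
        for (c, d) in links[i + 1:]:
            if a < c < b < d or c < a < d < b:
                return False
    return True

# "I ate pizza with a fork" -> word positions 0..5
nested = [(0, 1), (1, 2), (1, 3), (3, 5), (4, 5)]    # non-crossing
crossed = [(0, 3), (1, 4)]                           # crossing
print(links_are_planar(nested), links_are_planar(crossed))   # True False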


Operator Grammar deals much more with semantics.

My personal opinion is that -- contrary to what is stated in Wikipedia 
(where it says "The categories in Operator Grammar are universal and are 
defined purely in terms of how words relate to other words, and do not rely 
on an external set of categories such as noun, verb, adjective, adverb, 
preposition, conjunction, etc. The dependency properties of each word are 
observable through usage and therefore learnable") -- the most effective way 
to use Operator Grammar concepts is to implement them as the next step after 
a Link Grammar parser.


While it may be true that "the dependency properties of each word are 
observable through usage and therefore learnable", it is my personal belief 
that the "external set of categories such as noun, verb, adjective, adverb, 
preposition, conjunction, etc" are pretty much universal and that 
re-learning them is a waste of time.


Using the concepts of Operator Grammar can help disambiguate situations 
where the Link Parser returns several viable parses, can help to learn new 
words, and is an important step in moving from syntax (which is all that the 
Link Grammar parser really does) to semantics (which is really what Operator 
Grammar does) and then meaning and understanding.  I actually re-invented 
(in a manner of speaking) dependency before I ran across it in Operator 
Grammar and have subsequently learned much from Harris's treatments of both 
dependency and reduction and have implemented a lot of them in my current 
project.
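
As a toy illustration of the disambiguation step (the pair likelihoods below
are invented numbers, not anything my system has actually learned):

import math

# Likelihoods of word pairs standing in a dependency relation.
pair_likelihood = {
    ("ate", "pizza"): 0.2,
    ("ate", "fork"): 0.05,      # instruments attach to the verb
    ("pizza", "fork"): 0.0001,  # forks rarely modify pizza
}

# Two viable parses of "I ate pizza with a fork", as the dependencies they assert.
parses = {
    "attach 'with a fork' to 'ate'": [("ate", "pizza"), ("ate", "fork")],
    "attach 'with a fork' to 'pizza'": [("ate", "pizza"), ("pizza", "fork")],
}

def score(dependencies):
    return sum(math.log(pair_likelihood.get(d, 1e-6)) for d in dependencies)

print(max(parses, key=lambda p: score(parses[p])))   # the verb attachment wins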


So -- again, in my opinion -- Link Grammar wins the Google fight because it 
is much easier; but you really need both to get anywhere (plus a huge dose 
of Construction Grammar -- which you'll notice just barely wins a Google 
fight with Link Grammar :-).


   Mark

- Original Message - 
From: "Chuck Esterbrook" <[EMAIL PROTECTED]>

To: 
Sent: Monday, May 21, 2007 10:24 PM
Subject: [agi] Parsing theories



Any opinions on Operator Grammar vs. Link Grammar?

http://en.wikipedia.org/wiki/Operator_Grammar

http://en.wikipedia.org/wiki/Link_grammar

Link Grammar seems to have spawned practical software, but Operator
Grammar has some compelling ideas including coherent selection,
information content and more. Maybe these ideas are too hard or too
ill-defined to implement?

Or, in other words, why does Link Grammar win the GoogleFight?
http://www.googlefight.com/index.php?lang=en_GB&word1=%22link+grammar%22&word2=%22operator+grammar%22
(http://tinyurl.com/yvu9xr)

-Chuck







Re: [agi] Parsing theories

2007-05-22 Thread Chuck Esterbrook

On 5/22/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:

I'm not doing any active work on it at the moment, but my favorite approach
has been Mark Jones' active production networks, which are one of those
schemes that lie in the twilight between symbolic and connectionist. Like
Copycat, it is based on a semantic net with spreading activation and variable
connection strengths. The network looks like the tree of a grammar, with lots
of extra links, and the text is fed in by sequentially "lighting up" the
terminal nodes that correspond to words. After each one, the network
reconfigures itself to interpret the next word/phrase appropriately.

There is no formal distinction between nodes holding syntactic and semantic
information. Indeed, if you "light up" nodes corresponding to a semantic
situation, the network can be jogged to produce sentences describing it.


Sounds interesting. I found some papers on it, but couldn't locate a
home page for Jones or the software. Do you have any good URLs to
share that Google isn't coughing up?

-Chuck



Re: [agi] Parsing theories

2007-05-22 Thread Matt Mahoney

--- Chuck Esterbrook <[EMAIL PROTECTED]> wrote:

> Any opinions on Operator Grammar vs. Link Grammar?
> 
> http://en.wikipedia.org/wiki/Operator_Grammar
> 
> http://en.wikipedia.org/wiki/Link_grammar
> 
> Link Grammar seems to have spawned practical software, but Operator
> Grammar has some compelling ideas including coherent selection,
> information content and more. Maybe these ideas are too hard or too
> ill-defined to implement?
> 
> Or, in other words, why does Link Grammar win the GoogleFight?
>
http://www.googlefight.com/index.php?lang=en_GB&word1=%22link+grammar%22&word2=%22operator+grammar%22
> (http://tinyurl.com/yvu9xr)

Link grammar has a website and online demo at
http://www.link.cs.cmu.edu/link/submit-sentence-4.html

But as I posted earlier, it gives the same parse for:

- I ate pizza with pepperoni.
- I ate pizza with a friend.
- I ate pizza with a fork.

which shows that you can't separate syntax and semantics.  Many grammars have
this problem.

Operator grammar seems to me to be a lot closer to the way natural language
actually works.  It includes semantics.  The basic constraints (dependency,
likelihood, and reduction) are all learnable.  It might have gotten less
attention because its main proponent, Zellig Harris, died in 1992, just before
it became feasible to test the grammar in computational models (e.g.
perplexity or text compression).  Also, none of his publications are online,
but you can find reviews of his books at http://www.dmi.columbia.edu/zellig/
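
For what such a test would even mean, here is a minimal sketch of perplexity
with a unigram model on a toy corpus (a real evaluation would of course use
held-out text and a much richer model):

import math
from collections import Counter

train = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(train)
total = len(train)

def prob(word):
    # add-one smoothing so unseen words do not get zero probability
    return (counts[word] + 1) / (total + len(counts) + 1)

test = "the cat sat on the rug".split()
cross_entropy = -sum(math.log2(prob(w)) for w in test) / len(test)
print(f"perplexity = {2 ** cross_entropy:.1f}")   # lower is better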


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Parsing theories

2007-05-22 Thread J Storrs Hall, PhD
I'm not doing any active work on it at the moment, but my favorite approach 
has been Mark Jones' active production networks, which are one of those 
schemes that lie in the twilight between symbolic and connectionist. Like 
Copycat, it is based on a semantic net with spreading activation and variable 
connection strengths. The network looks like the tree of a grammar, with lots 
of extra links, and the text is fed in by sequentially "lighting up" the 
terminal nodes that correspond to words. After each one, the network 
reconfigures itself to interpret the next word/phrase appropriately. 

There is no formal distinction between nodes holding syntactic and semantic 
information. Indeed, if you "light up" nodes corresponding to a semantic 
situation, the network can be jogged to produce sentences describing it.
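
Just to convey the flavor of spreading activation, a generic toy sketch (not
Jones' actual APN formulation; the net below is invented):

# node -> [(neighbour, link strength)]
graph = {
    "fork": [("eat", 0.8), ("utensil", 0.9)],
    "eat": [("pizza", 0.7), ("fork", 0.8)],
    "pizza": [("eat", 0.7), ("food", 0.9)],
    "utensil": [],
    "food": [],
}

def spread(activation, steps=2, decay=0.5):
    """Each step, every node passes a decayed share of its activation along its links."""
    for _ in range(steps):
        new = dict(activation)
        for node, value in activation.items():
            for neighbour, weight in graph.get(node, []):
                new[neighbour] = new.get(neighbour, 0.0) + decay * weight * value
        activation = new
    return activation

# "Light up" the terminal nodes for the words just read:
print(spread({"eat": 1.0, "fork": 1.0}))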

Josh

On Monday 21 May 2007 10:24:21 pm Chuck Esterbrook wrote:
> Any opinions on Operator Grammar vs. Link Grammar?



Re: [agi] Parsing theories

2007-05-21 Thread Lukasz Stafiniak

On 5/22/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote:

Any opinions on Operator Grammar vs. Link Grammar?

http://en.wikipedia.org/wiki/Operator_Grammar


If you are interested in Operator Grammar, perhaps you would also want
to take a look at Grammatical Framework:

http://www.cs.chalmers.se/~aarne/GF/

P.S. My first response might have been too quick. Operator Grammar is
certainly worth taking a closer look at (it skipped my attention before).



Re: [agi] Parsing theories

2007-05-21 Thread Lukasz Stafiniak

On 5/22/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote:

Any opinions on Operator Grammar vs. Link Grammar?

http://en.wikipedia.org/wiki/Operator_Grammar

http://en.wikipedia.org/wiki/Link_grammar


That wiki article is too brief to judge from, but I'd say that operator
grammar takes most ideas from dependency grammar and some (the stress on
semantics) from categorial grammar. It is also probabilistic by
default, while other approaches add probabilities as an afterthought.

But operator grammar is not a "main player in the field" (correct me
if I'm wrong). The main players are:
HPSG, LFG, xTAG, dependency grammars (including multidimensional), and CCG.
The entry point is "context-free" unification grammars.
As far as I know, link grammar has not yet been "lifted" into a
unification grammar.


--

"Any sufficiently advanced linguistic framework is indistinguishable from HPSG."
(an application of Clarke's third law)



Re: [agi] Parsing theories

2007-05-21 Thread Benjamin Goertzel

Handling syntax separately from semantics and pragmatics is hacky
and non-AGI-ish ... but it makes it easier to get NLP systems working at a
primitive level in a non-embodied context.

Operator grammar mixes syntax and semantics, which is philosophically
correct but makes things harder.

Link grammar is purely syntactic, which is philosophically wrong, but makes
things implementationally easier.

I have worked a lot with the link parser and it is pretty good for a
rule-based statistical parser.  But this kind of NLP framework has intrinsic
limitations.

The way we intend to ultimately do NLP in Novamente has more in common with
operator grammar ... but we have used the link parser for commercial NLP
projects, because it (sorta) works...

-- Ben G


On 5/21/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote:


Any opinions on Operator Grammar vs. Link Grammar?

http://en.wikipedia.org/wiki/Operator_Grammar

http://en.wikipedia.org/wiki/Link_grammar

Link Grammar seems to have spawned practical software, but Operator
Grammar has some compelling ideas including coherent selection,
information content and more. Maybe these ideas are too hard or too
ill-defined to implement?

Or, in other words, why does Link Grammar win the GoogleFight?

http://www.googlefight.com/index.php?lang=en_GB&word1=%22link+grammar%22&word2=%22operator+grammar%22
(http://tinyurl.com/yvu9xr)

-Chuck



