Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote:
 Amusingly, one of my projects at the moment is to show that
 Novamente's economic attention allocation module can display
 Hopfield net type content-addressable-memory behavior on simple
 examples.  As a preliminary step to integrating it with other aspects
 of Novamente cognition (reasoning, evolutionary learning, etc.)

I assume everyone here is familiar with the agorics papers of Drexler and 
Miller: http://www.agorics.com/Library/agoricpapers.html and this one of 
mine: http://autogeny.org/chsmith.html which combines agoric and genetic 
algorithms (in a system named Charles Smith :-)

Josh





Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel

On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:

On Monday 27 November 2006 10:35, Ben Goertzel wrote:
 Amusingly, one of my projects at the moment is to show that
 Novamente's economic attention allocation module can display
 Hopfield net type content-addressable-memory behavior on simple
 examples.  As a preliminary step to integrating it with other aspects
 of Novamente cognition (reasoning, evolutionary learning, etc.)

I assume everyone here is familiar with the agorics papers of Drexler and
Miller: http://www.agorics.com/Library/agoricpapers.html and this one of
mine: http://autogeny.org/chsmith.html which combines agoric and genetic
algorithms (in a system named Charles Smith :-)

Josh


Also related is Eric Baum's work described at

http://www.whatisthought.com/eric.html

See

"Manifesto for an Evolutionary Economics of Intelligence"

for theory and

"Evolution of Cooperative Problem-Solving in an Artificial Economy"

(or related discussion in What Is Thought?)

for a simple narrow-AI application based on the theory.

What I am doing with artificial currencies within Novamente is not
much like either Drexler & Miller's or Baum's ideas, but was
philosophically inspired by both...

-- Ben



Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote:
...
 An issue with Hopfield content-addressable memories is that their
 memory capability gets worse and worse as the networks get sparser and
 sparser.   I did some experiments on this in 1997, though I never
 bothered to publish the results ... 

[General observations, not aimed at Ben in particular:]

One of the reasons I'm not looking at actual Hopfield (or any other kind of 
NN) is that I think that a huge amount of what goes on in AI today is 
premature optimization. I.e. the vast majority of the technical work has more 
to do with taking operations that don't have intelligence and making them run 
fast, than with finding operations that do exhibit intelligence. My approach, 
admittedly unusual, is to assume I have all the processing power and memory I 
need, up to a generous estimate of what the brain provides (a petaword and
100 petaMACs), and then see if I can come up with operations that do what it
does. If not, it would be silly to try to do the same task with a machine
one to 100 thousand times smaller.

There are plenty of cases where it's just a royal pain to get a Hopfield net 
or any other NN to do something that's blindingly simple for an ordinary 
program or vector equation. Ignore the implementation; think in terms of the
data representation as long as you can. When you've got that nailed, you can try
for that factor of a thousand optimization...

--Josh



Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel

My approach,
admittedly unusual, is to assume I have all the processing power and memory I
need, up to a generous estimate of what the brain provides (a petaword and
100 petaMACs), and then see if I can come up with operations that do what it
does. If not, it would be silly to try to do the same task with a machine
one to 100 thousand times smaller.


Yes, this is what we've done with Novamente as well.

In fact, I am quite confident the NM architecture, when fully
implemented and tuned, can yield powerful AGI.  The biggest open
question in my mind is exactly how much computational resources will
be required to achieve what levels of intelligence.  We have tried to
make estimates of this, but they're all pretty fudgy.  It is clear
that the resource requirements won't be insane on the level of AIXI or
AIXItl, but in practical terms, a factor of 100 in computational
resource requirements makes a big difference...

-- Ben



Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz

On 11/24/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:

On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote:
 You talked mainly about how sentences require vast amounts of external
 knowledge to interpret, but it does not imply that those sentences cannot
 be represented in (predicate) logical form.

Substitute "bit string" for "predicate logic" and you'll have a sentence that
is just as true and not a lot less useful.

 I think there should be a
 working memory in which sentences under attention would bring up other
 sentences by association.  For example, if "a person is being kicked" is in
 working memory, that fact would bring up other facts such as "being kicked
 causes a person to feel pain and possibly to get angry", etc.  All this is
 orthogonal to *how* the facts are represented.

Oh, I think the representation is quite important. In particular, logic lets
you in for gazillions of inferences that are totally inapropos, with no good
way to say which is better. Logic also has the enormous disadvantage that you
tend to have frozen the terms and levels of abstraction. Actual word meanings
are a lot more plastic, and I'd bet internal representations are damn near
fluid.


The use of predicates for representation, and the use of logic for
reasoning, are separate issues.  I think it's pretty clear that
English sentences translate neatly into predicate logic statements,
and that such a transformation is likely a useful first step for any
sentence-understanding process.  Whether those predicates are then
used to draw conclusions according to a standard logic system, or are
used as inputs to a completely different process, is a different
matter.
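
For instance (the predicate and role names below are an invented inventory,
chosen only for illustration, not a claim about the right one), a sentence
might come out as something like:

# A hedged sketch of "English -> predicates" in Python.
# "John gave Mary a book."
gave = ("give",
        ("agent", "john"),
        ("recipient", "mary"),
        ("theme", ("book", ("det", "a"))))

# Whether these tuples then feed a theorem prover or some entirely
# different process is the separate question; the representation is
# just the sentence with its dependencies spelled out.
print(gave)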


The open questions are representation -- I'm leaning towards CSG in Hilbert
spaces at the moment, but that may be too computationally demanding -- and
how to form abstractions.


Does CSG = context-sensitive grammar in this case?  How would you use
Hilbert spaces?



Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz

On 11/27/06, Ben Goertzel [EMAIL PROTECTED] wrote:

An issue with Hopfield content-addressable memories is that their
memory capability gets worse and worse as the networks get sparser and
sparser.   I did some experiments on this in 1997, though I never
bothered to publish the results ... some of them are at:

http://www.goertzel.org/papers/ANNPaper.html


I found just the opposite - Hopfield network memory capability gets
much better as the networks get sparser, down to very low connection
densities.  However, I was measuring performance as a function of
storage space and computation.  A fully-connected Hopfield network of
100 neurons has about 10,000 connections.  A Hopfield network of 100
neurons that has only 10 connections per neuron has one-tenth as many
connections, but can recall more than one-tenth as many patterns.

Furthermore, if you selectively eliminate the weak connections and
keep the strong ones, you can make Hopfield networks that are very
sparse yet perform almost as well as fully-connected ones.
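
Something like this toy sketch (Python; the pattern counts and pruning
fraction are illustrative, not my actual experimental settings) captures
the comparison:

import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning; patterns are +/-1 vectors."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def prune_weak(W, keep_fraction):
    """Keep only the strongest weights, zeroing the rest."""
    thresh = np.quantile(np.abs(W), 1.0 - keep_fraction)
    return np.where(np.abs(W) >= thresh, W, 0.0)

def recall(W, probe, steps=20):
    """Iterate sign(Wx) until the state settles."""
    x = probe.copy()
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
n, n_patterns = 100, 8
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = train_hopfield(patterns)
W_sparse = prune_weak(W, keep_fraction=0.10)  # ~10 connections/neuron

# Probe with a corrupted version of the first pattern (10 flipped bits).
probe = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
probe[flip] *= -1
for name, weights in [("dense", W), ("sparse", W_sparse)]:
    out = recall(weights, probe)
    print(name, "overlap with target:", int(out @ patterns[0]))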

BTW, the canonical results about Hopfield network capacity in the
McEliece 1987 paper are wrong - I can't find the flaw, so I don't know
why they're wrong, but I know that the paper

a) makes the mistake of comparing recall errors of a fixed number of
bits between networks of different sizes, which means that it counts a
1-bit error in recalling a 1000-node pattern as equivalent to a 1-bit
error in recalling a 10-node pattern, and

b) claims that recall of n-bit patterns, starting from a
presented pattern that differs in n/2 bits, is quite good.  This is
impossible, since differing in n/2 bits means the input pattern is a
RANDOM pattern wrt the target, and half of all the targets should be
closer to the input pattern.



Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz

On 11/26/06, Pei Wang [EMAIL PROTECTED] wrote:

Therefore, the problem of using an n-space representation for AGI is
not its theoretical possibility (it is possible), but its practical
feasibility. I have no doubt that for many limited applications,
n-space representation is the most natural and efficient choice.
However, for a general purpose system, the situation is very
different. I'm afraid that for AGI we may need millions (if not
more) of dimensions, and it won't be easy to decide in advance what
dimensions are necessary.


I see evidence of dimensionality reduction by humans in the fact that
adopting a viewpoint has such a strong effect on the kind of
information a person is able to absorb.  In conversations about
politics or religion, I often find ideas that to me seem simple, that
I cannot communicate to someone of a different viewpoint.  We both
start with the same input - some English sentences, say - but I think
we compress them in different, yet internally consistent, ways.  Their
viewpoint is based on a compression scheme that simply compresses out
what I am trying to communicate.

It may be that psychological repression is the result of compressing
out dimensions, or data, that had low utility.  Someone who is
repeatedly exposed to a trauma which they are unable to do anything
about may calculate, subconsciously, that the awareness of that trauma
is simply useless information.

Trying to suggest a PCA-like dimensionality reduction of concepts by
humans has the difficulty that a human should then remember, or be
aware of, those implications of a sentence which have had the most
variance, or the most impact, in their experiences.  In fact, we often
find that people make the greatest compression along the dimensions that
have the highest importance to them, compressing a whole set of important
distinctions into the binary "good-evil" dimension.  It may be that
our motivational system can handle only a small number of dimensions -
say, five - and that "good-evil" is one of the principal components
whose impact is so large we are actually aware of it.
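
In toy form, the kind of compression I'm speculating about (Python/numpy;
the "concept vectors" here are invented):

import numpy as np

# Invented 10-D "concept vectors" in which one dimension carries most
# of the variance; project onto the top principal component and every
# concept collapses to a single number.
rng = np.random.default_rng(1)
concepts = rng.normal(size=(50, 10))
concepts[:, 0] *= 5.0                      # the dominant dimension

centered = concepts - concepts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]                               # call it the "good-evil" axis

compressed = centered @ axis               # 50 concepts -> 50 scalars
print(compressed[:5])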



Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
Philip Goetz [EMAIL PROTECTED] wrote:
The use of predicates for representation, and the use of logic for
reasoning, are separate issues.  I think it's pretty clear that
English sentences translate neatly into predicate logic statements,
and that such a transformation is likely a useful first step for any
sentence-understanding process.  

I don't think it is clear at all.  Try translating some poetry.  Even for 
sentences that do have a clear representation in first order logic, the 
translation from English is far from straightforward.  It is an unsolved 
problem.

I also dispute that it is even useful for sentence understanding.  Google 
understands simple questions, and its model is just a bag of words.  Attempts 
to apply parsing or reasoning to information retrieval have generally been a 
failure.

It would help to define what "sentence understanding" means.  I say a computer 
understands English if it can correctly assign probabilities to long strings, 
where "correct" means ranked in the same order as judged by humans.  So a 
program that recognizes the error in the string "the cat caught a moose" could 
be said to understand English.  Thus, the grammar checker in Microsoft Word 
would have more understanding of a text document than a simple spell checker, 
but less understanding than most humans.  Maybe you have a different 
definition.  A reasonable definition for AI should be close to the conventional 
meaning and also be testable without making any assumption about the internals 
of the machine.
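
As a toy version of this test (Python; the bigram counts are invented, where a 
real model would learn them from a corpus):

from math import log

# A model "understands" to the degree its string probabilities rank
# the way humans' would.
bigram_counts = {("the", "cat"): 500, ("cat", "caught"): 40,
                 ("caught", "a"): 60, ("a", "mouse"): 120,
                 ("a", "moose"): 1}

def score(sentence, smoothing=0.5):
    """Sum of smoothed log bigram counts as a crude string probability."""
    words = sentence.lower().split()
    return sum(log(bigram_counts.get(pair, 0) + smoothing)
               for pair in zip(words, words[1:]))

print(score("the cat caught a mouse") > score("the cat caught a moose"))  # True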

Now it seems to me that you need to understand sentences before you can 
translate them into FOL, not the other way around. Before you can translate to 
FOL you have to parse the sentence, and before you can parse it you have to 
understand it, e.g.

I ate pizza with pepperoni.
I ate pizza with a fork.

Using my definition of understanding, you have to recognize that "ate with a 
fork" and "pizza with pepperoni" rank higher than "ate with pepperoni" and 
"pizza with a fork".  A parser needs to know millions of rules like this.
  
-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Tuesday 28 November 2006 14:47, Philip Goetz wrote:
 The use of predicates for representation, and the use of logic for
 reasoning, are separate issues.  I think it's pretty clear that
 English sentences translate neatly into predicate logic statements,
 and that such a transformation is likely a useful first step for any
 sentence-understanding process.  Whether those predicates are then
 used to draw conclusions according to a standard logic system, or are
 used as inputs to a completely different process, is a different
 matter.

I would beg to differ. While it is clearly straightforward to translate a 
sentence into a predicate expression in a syntactic way, the resulting 
structure has no coherent semantics. 

Consider how much harder it is to translate a sentence of English into a 
sentence of Chinese. Even then you won't have uncovered the meat of the 
semantics, since in both languages you can rely on a lot of knowledge the 
hearer already has. 

But when you put the sentence into predicate form, you've moved into a 
formalism where there is no such semantics behind the representation. In 
order to provide them, you have to do the equivalent of writing a Prolog 
program that could make the same predictions, explanations, or replies that a 
human speaker could to the original English sentence.

Consider the following sentences. Could you translate them all using the 
single predicate on(A,B)? If not, the translation gets messier:

On the table is an apple.
On Lake Ontario is Toronto.
On Hadamard's theory transubstantiation is ineffable.
On Comet, on Cupid, on Prancer and Vixen.
On Christmas we open presents.
On time is better than late.
On budget expenditures are dwarfed by Social Security.
On and on the list goes...
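
Mechanically, the uniform translation is trivial (Python sketch below) -- 
which is exactly the problem:

# The purely syntactic translation, mechanized. The predicate is
# uniform; the semantics are anything but.
sentences = [
    ("on", "the table", "an apple"),
    ("on", "Lake Ontario", "Toronto"),
    ("on", "Hadamard's theory", "transubstantiation is ineffable"),
    ("on", "Christmas", "we open presents"),
]
for pred, a, b in sentences:
    print(f"{pred}({a!r}, {b!r})")
# Nothing in on(A, B) distinguishes physical support from temporal
# location from intellectual authority.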

  The open questions are representation -- I'm leaning towards CSG in
  Hilbert spaces at the moment, but that may be too computationally
  demanding -- and how to form abstractions.

 Does CSG = context-sensitive grammar in this case?  How would you use
 Hilbert spaces?

Sorry -- should have been clearer. Constructive Solid Geometry. Manipulating 
shapes in high- (possibly infinite-) dimensional spaces.

Suppose I want to represent a face as a point in a space. First, represent it 
as a raster. That is in turn a series of numbers that can be a vector in the 
space. Same face, higher resolution: more numbers, higher dimensionality 
space, but you can map the regions that represent the same face between 
higher- and lower-dimensional spaces. Do it again, again, etc.: take the limit 
as the resolution and dimensionality go to infinity. You can no more 
represent this explicitly than you can a real number, but you can use it as 
an abstraction, as a theory to tell you how well your approximations are 
working.
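
A sketch of the construction (Python; the "face" here is just an invented 
smooth function standing in for a raster):

import numpy as np

def face(res):
    """Render the same 'face' as a res x res raster."""
    y, x = np.mgrid[0:1:res * 1j, 0:1:res * 1j]
    return np.sin(6 * x) * np.cos(4 * y)

def project(img, factor):
    """Map a fine raster down to the coarse space by block-averaging."""
    k = img.shape[0] // factor
    return img.reshape(k, factor, k, factor).mean(axis=(1, 3))

hi = face(64)               # a point in a 4096-dimensional space
lo = project(hi, 8)         # its image in the 64-dimensional space
# The projection and a direct low-res rendering roughly agree, and agree
# better as resolution climbs; the limit object can't be written down,
# but it tells you how good any finite approximation is.
print(np.abs(lo - face(8)).max())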

--Josh



Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz

I think that Matt and Josh are both misunderstanding what I said in
the same way.  Really, you're both attacking the use of logic on the
predicates, not the predicates themselves as a representation, and so
ignoring the distinction I was trying to draw.  I am not saying that
rewriting English as predicates magically provides semantics.

On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:

On Tuesday 28 November 2006 14:47, Philip Goetz wrote:
 The use of predicates for representation, and the use of logic for
 reasoning, are separate issues.  I think it's pretty clear that
 English sentences translate neatly into predicate logic statements,
 and that such a transformation is likely a useful first step for any
 sentence-understanding process.  Whether those predicates are then
 used to draw conclusions according to a standard logic system, or are
 used as inputs to a completely different process, is a different
 matter.

I would beg to differ. While it is clearly straightforward to translate a
sentence into a predicate expression in a syntactic way, the resulting
structure has no coherent semantics.


Translating into a predicate expression doesn't give you any semantics.
But it doesn't take any away, either.  It just gives you the sentence
in a neater form, with the hierarchies and dependencies spelled out.


Consider the following sentences. Could you translate them all using the
single predicate on(A,B)? If not, the translation gets messier:

On the table is an apple.
On Lake Ontario is Toronto.
On Hadamard's theory transubstantiation is ineffable.
On Comet, on Cupid, on Prancer and Vixen.
On Christmas we open presents.
On time is better than late.
On budget expenditures are dwarfed by Social Security.
On and on the list goes...


You used the same word "on" in English for each of them.
I thus get to use the same word "on" in a predicate representation for
each of them.
I don't claim that each instance of the predicate "on" means the same thing!
The application of a logic rule that matched any instance of on(A,B)
would be making such a claim, but, as I tried to explicitly point out,
that is a problem with logic, not with predicates as a representation.



Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz

Oops, Matt actually is making a different objection than Josh.


Now it seems to me that you need to understand sentences before you can 
translate them into FOL, not the other way around. Before you can translate to 
FOL you have to parse the sentence, and before you can parse it you have to 
understand it, e.g.

I ate pizza with pepperoni.
I ate pizza with a fork.

Using my definition of understanding, you have to recognize that "ate with a fork" and "pizza with 
pepperoni" rank higher than "ate with pepperoni" and "pizza with a fork".  A parser needs to 
know millions of rules like this.


Yes, this is true.  When I said "neatly", I didn't mean "easily".  I
mean that the correct representation in predicate logic is very
similar to the English, and doesn't lose much meaning.  It was
misleading of me to say that it's a good starting point, though, since
you do have to do a lot to get those predicates.

A predicate representation can be very useful.  This doesn't mean that
you have to represent all of the predications that could be extracted
from a sentence.  The NLP system I'm working on does not, in fact, use
a parse tree, for essentially the reasons Matt just gave.  It doesn't
want to make commitments about grammatical structure, so instead it
just groups things into phrases, without deciding what the
dependencies are between those phrases, and then has a bunch of
different demons that scan those phrases looking for particular
predications.  As you find predications in the text, you can eliminate
certain choices of lexical or semantic category for words, and
eliminate arguments so that they can't be re-used in other
predications.  You never actually find the correct parse in our
system, but you could if you wanted to.  It's just that we've already
extracted the meaning that we're interested in by the time we have
enough information to get the right parse, so the parse tree isn't of
much use.  We get the predicates that we're interested in, for the
purposes at hand.  We might never have to figure out whether pepperoni
is a part or an instrument, because we don't care.
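
In toy form (Python; the patterns and predicate names are invented, and the 
real system is much richer):

import re

# Phrase chunks plus independent "demons", each scanning for one kind
# of predication -- no parse tree required.
phrases = ["I", "ate", "pizza", "with pepperoni"]
text = " ".join(phrases)

DEMONS = [
    ("eat_event", re.compile(r"\b(\w+) ate (\w+)")),
    ("topping",   re.compile(r"pizza with (\w+)")),
]

predications = []
for name, pattern in DEMONS:
    m = pattern.search(text)
    if m:
        predications.append((name,) + m.groups())
        # A fuller system would retire matched arguments here so other
        # demons can't re-use them in conflicting predications.

print(predications)  # [('eat_event', 'I', 'pizza'), ('topping', 'pepperoni')]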



Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz

On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:

Sorry -- should have been clearer. Constructive Solid Geometry. Manipulating
shapes in high- (possibly infinite-) dimensional spaces.

Suppose I want to represent a face as a point in a space. First, represent it
as a raster. That is in turn a series of numbers that can be a vector in the
space. Same face, higher resolution: more numbers, higher dimensionality
space, but you can map the regions that represent the same face between
higher and lower-dimensional spaces. Do it again, again, etc: take the limit
as the resolution and dimensionality go to infinity. You can no more
represent this explicitly than you can a real number, but you can use it as
an abstraction, as a theory to tell you how well your approximations are
working.


I see that a raster is a vector.  I see that you can have rasters at
different resolutions.  I don't see what you mean by "map the regions
that represent the same face between higher and lower-dimensional
spaces", or what you are taking the limit of as resolution goes to
infinity, or why you don't just stick with one particular resolution.



Re: [agi] Natural versus formal AI interface languages

2006-11-28 Thread Philip Goetz

On 11/9/06, Eric Baum [EMAIL PROTECTED] wrote:

It is true that much modern encryption is based on simple algorithms.
However, some crypto-experts would advise more primitive approaches.
RSA is not known to be hard; even if P!=NP, someone may find a
number-theoretic trick tomorrow that factors. (Or maybe they already
have it, and choose not to publish).
If you use a messy machine like a modern version of Enigma, that is
much less likely to get broken, even though you may not have the
theoretical results.


DES is essentially a big messy bit-scrambler; like Enigma, but with
bits instead of letters.  The relative security of the two approaches
is debated by cryptologists.  On one hand, RSA could be broken by a
computational trick (or a quantum computer).  On the other hand, DES
is so messy that it's very hard to be sure there isn't a foothold for
an attack, or even a deliberate backdoor, in it.
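
In miniature, the shape of the construction (a toy Feistel-style scrambler 
in Python -- not DES, whose round function and key schedule are entirely 
different):

# Security here rests on the mess of the round function, not on a
# clean number-theoretic problem like RSA's.
def round_fn(half, key):
    return ((half * 2654435761) ^ key) & 0xFFFFFFFF   # arbitrary mixing

def feistel(block64, keys, decrypt=False):
    left, right = block64 >> 32, block64 & 0xFFFFFFFF
    for k in (reversed(keys) if decrypt else keys):
        left, right = right, left ^ round_fn(right, k)
    return (right << 32) | left       # final swap makes it self-inverse

keys = [0xDEADBEEF, 0x12345678, 0x0BADF00D, 0xCAFEBABE]
ct = feistel(0x0123456789ABCDEF, keys)
assert feistel(ct, keys, decrypt=True) == 0x0123456789ABCDEF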



Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
First order logic (FOL) is good for expressing simple facts like "all birds 
have wings" or "no bird has hair", but not for statements like "most birds can 
fly".  To do that you have to at least extend it with fuzzy logic (probability 
and confidence).
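
One sketch of such an extension (Python; the numbers and the one-step 
inference scheme are invented for illustration):

from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: str
    consequent: str
    probability: float   # fraction of cases where the rule holds
    confidence: float    # how much evidence backs that fraction

rules = [
    Rule("bird(X)", "has_wings(X)", 1.00, 0.95),   # "all birds have wings"
    Rule("bird(X)", "can_fly(X)",   0.90, 0.90),   # "most birds can fly"
    Rule("bird(X)", "has_hair(X)",  0.00, 0.95),   # "no bird has hair"
]

def infer(fact, rules):
    """Forward-chain one step, attaching weights instead of certainty."""
    pred = fact.split("(")[0]
    arg = fact[fact.index("(") + 1:-1]
    return [(r.consequent.replace("X", arg), r.probability, r.confidence)
            for r in rules if r.antecedent.startswith(pred)]

print(infer("bird(tweety)", rules))
# [('has_wings(tweety)', 1.0, 0.95), ('can_fly(tweety)', 0.9, 0.9), ...]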

A second problem is, how do you ground the terms?  If you have "for all X, 
bird(X) => has(X, wings)", where do "bird", "wings", and "has" get their 
meanings?  The terms do not map 1-1 to English words, even though we may use 
the same notation.  For example, you can talk about the wings of a building, or 
the idiom "wing it".  Most words in the dictionary list several definitions 
that depend on context.  Also, words gradually change their meaning over time.

I think FOL represents complex ideas poorly.  Try translating what you just 
wrote into FOL and you will see what I mean.
 
-- Matt Mahoney, [EMAIL PROTECTED]
