Again, in bold blue below.

----- Original Message -----
  From: YKY (Yan King Yin) 
  To: [email protected] 
  Sent: Sunday, April 29, 2007 1:01 AM
  Subject: Re: [agi] rule-based NL system


  Mark,

  >> I need to know a bit more about your approach.  What do you mean when you 
say "grammar is embedded in your KR"?

  The knowledge representation scheme is based upon the idea that language is 
the substrate of basic cognition.  Thus, everything in it should be classified, 
categorized, viewed and treated in the way in which language has evolved to 
treat it.  This approach can give a number of insights and help restrict the 
problem.  For example, the normal KR object is viewed as a noun and the classic 
links are viewed as sentences with the nouns/objects occupying slots whose 
characteristics and behavior are inspired by grammar.  There is also a lot of 
hierarchy that is implicit in language so you immediately realize that a noun 
slot can be filled by either a simple noun or a noun clause and that the same 
holds true of verbs, etc.  You also realize that language gives you a lot of 
inheritance hierarchies -- particularly with verbs where it is most important 
for simplification and efficiently storing restrictions (i.e. how many verbs 
can be generalized to simply "move" or "give"?).  Grammar also gives you an 
excellent idea of what (additional) information you should have available at 
any given point.
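
  A minimal sketch of how such a language-shaped KR might be laid out (the
class names, slot names, and hierarchy here are illustrative assumptions on my
part, not the actual system):

```python
# Hypothetical sketch: KR objects behave like nouns, links behave like
# sentences whose slots are inspired by grammar.

class Noun:
    """A KR object; may be a simple noun or a reducible noun clause."""
    def __init__(self, label, parents=()):
        self.label = label
        self.parents = list(parents)  # inheritance implicit in language, e.g. dog -> animal

    def isa(self, other):
        # walk the inheritance hierarchy that language gives for free
        return self is other or any(p.isa(other) for p in self.parents)

class Sentence:
    """A link: a verb plus grammar-inspired slots (subject, object, ...)."""
    def __init__(self, verb, **slots):
        self.verb = verb    # verbs generalize too: hand -> give -> move
        self.slots = slots  # each slot holds a Noun or a nested Sentence

# usage: "the dog chased the cat"
animal = Noun("animal")
dog = Noun("dog", parents=[animal])
cat = Noun("cat", parents=[animal])
fact = Sentence("chase", subject=dog, object=cat)

assert dog.isa(animal)
assert fact.slots["subject"].label == "dog"
```

  Note how a slot could just as easily be filled by another Sentence (a noun
clause) as by a simple Noun, which is the reducibility described above.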

  >> For an example rule like "NP --> det noun", how is it represented or 
"embedded" in your scheme? 

  Noun and object are equivalent in the scheme as long as you realize that 
objects are reducible (i.e. that an object may also be a noun phrase instead of 
a simple noun).  Links can be simple verbs or monstrous collections of facts 
themselves.

  >> Your approach may have these problems:
  >> 1.  you cannot learn a new NL;  English is hard-wired in your KR

  Nope.  Primarily I'm using those structural aspects of grammar which are 
invariant across languages.  Cognition is multi-dimensional but language is 
primarily one-dimensional.  The compression of the multiple dimensions of 
cognition down to the single dimension of language is where languages differ 
and the vast majority of that difference is labels (different words in 
different languages) and different output ordering.  The structures are 
fundamentally the same across languages.  

  As I've said before about learning a new language -- "I think that all it 
would require would be tagging each word with a language, a languageA to 
languageB dictionary, and a quick overhaul of the parser and generator to make 
the link types be language specific.  And yes, I *am* saying/claiming that I 
believe that this approach will pretty much automatically give you natural 
language translation."
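
  Under that claim, learning a second language would amount to something like
the following sketch (the tagging scheme, concept labels, and dictionary format
are my illustrative assumptions):

```python
# Sketch: language-tagged words map into a shared, language-neutral
# concept layer.  Structures are shared across languages; only the
# labels and the output ordering differ.

lexicon = {
    ("dog", "en"): "NOUN_DOG",
    ("perro", "es"): "NOUN_DOG",
    ("chase", "en"): "VERB_CHASE",
    ("perseguir", "es"): "VERB_CHASE",
}

def translate(word, source_lang, target_lang):
    """Translate by going through the shared concept layer."""
    concept = lexicon[(word, source_lang)]
    for (w, lang), c in lexicon.items():
        if c == concept and lang == target_lang:
            return w
    return None

assert translate("dog", "en", "es") == "perro"
```

  A real parser and generator would also need language-specific link types and
output ordering, as the quoted passage says; the concept layer itself stays
untouched.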

  >> 2.  you may have difficulty interpreting "irregular" sentences.  For 
example: "Better is the enemy of good" or "I am so not into this stuff".  Your 
texts need to be 100% grammatical. 

  Nope.  This focus actually gives me a better shot at "irregular" sentences 
since not only can I tell when something is ungrammatical but I actually have a 
decent idea of what is missing so I can actively try to derive it.  Much of the 
time, sentences are ungrammatical because something is implied and left out 
i.e. "Stop that!".

  Oh, and by the way, your two sentence examples are not irregular in grammar 
at all.  What throws you in the former is the fact that "better" is occupying a 
noun slot when it is normally an adjective.  In this case, however, the grammar 
is *correct* because you are talking about the noun/the concept of better.  
What throws you in the latter is that you don't think of "very" as being a 
synonym/definition of "so".  In both cases, relying on grammar makes your life 
tremendously easier because it *tells you* when something is not being used in 
the most common fashion.

  >> 3.  you may have problems doing "meta-linguistic" reasoning, i.e., 
reasoning *about* language itself.  For example, recognizing the peculiar 
speech pattern of Yoda in Star Wars, or... (can't think of more examples now). 

  Nope.  The system's reaction to Yoda would be the same as yours . . . . What 
the heck?  His sentences have all the structures they are supposed to (i.e. 
subject, verb, object) but they're always backwards.  And then it will cope 
with it quite well.  Note that we don't normally rely on grammatical order for 
the simplest sentences and the system needn't either after a while.

  >> In my approach, everything is represented by rules, therefore it has the 
most *generality*.  Your critique is that it is computationally too slow, but I 
can use the following speed-up tricks: 
  >> 1.  human-assisted disambiguation (asking the user questions etc)
  >> 2.  restrict to Basic English and short sentences
  >> 3.  other heuristics to improve the inference engine, eg using word 
frequency statistics

  It's a fundamental trade-off -- speed for flexibility.  Your "speed up" 
tricks *really* hammer your flexibility in ways that I seriously dislike.  
Option 1 needs humans.  Option 2 restricts your input -- probably to the extent 
that you're going to have a problem finding input.  Option 3 involves a lot of 
computation (to the extent that I would argue that it is computationally 
infeasible to get the magnitude of effect that you are looking for).

  Good engineering means intelligently determining your trade-offs and 
realizing exactly what you're giving away for what you're gaining.  I don't see 
you as currently having the necessary knowledge to accurately assess the 
effects of your decisions.

  >> My main focus is on *integrating* reasoning (particularly abduction and 
deduction), NLP, and truth maintenance.  My emphasis is on the big picture and 
I prefer to build a general-but-slow system rather than an 
efficient-but-limited one.

  Ah, the AIXI argument -- Given infinite computational resources I can do 
anything . . . . Do you know the difference between a computer scientist and a 
computer engineer?  :-) 

  >> But I'm interested in working with people with different foci, so our 
skills can be complementary.  Also it's possible that we work on a common 
system while exploring slightly different directions.

  Currently, our directions aren't "slightly" different.  You make a great 
sounding board but where we differ is pretty fundamental.  I'm arguing an 
engineering approach while you're saying that we *could* build that bridge 
molecule by molecule.

  On 4/28/07, Mark Waser <[EMAIL PROTECTED]> wrote: 
   
  > You take your route and I'll take mine.  I'm curious though (and this is a 
*major* first question) -- Are you going to allow people to define new terms? 
   
  $$$$$$$$$$$$$$$$$
  I will allow people to define new words -- it's not that difficult, it seems.

  Allowing people to define words is not difficult at all.  The implications, 
however, are horrendous unless you have a scheme to handle the combinatorial 
explosion, to say nothing of the probability of error, etc.  You seem oblivious 
to this.

  > Second, I don't believe that it is possible to learn complex grammar rules 
(via machine learning or any other method) unless you have a certain *rather 
large* amount of knowledge.
   
  $$$$$$$$$$$$$$$$$$$
  Agreed, and therefore I don't try to tackle this initially.  That's why I'll 
use Basic English.

  > I apparently wasn't clear.  By paraphrasing, I didn't mean re-arranging the 
sentence so that the grammar was different.  I meant using different words with 
the same meaning.  Whenever my system encounters a new word, it is going to 
ensure that it understands that word (by being able to translate it down to 
Basic English) or else it won't accept/use that word.  Is your system going to 
have the same requirement?  If so, I will withdraw my statement but if not I'll 
ask . . . . "What do you mean by understanding" since I will certainly argue 
that mere grammatical re-arrangement is NOT understanding. 
   

  $$$$$$$$$$$$$$$$$$$$$
  Yes, I'm talking about paraphrase in the broadest sense - with change of 
wording etc.
  1. Given 2 sentences A and B, if the user asks the query "Is A a paraphrase 
of B?" I think a logical reasoner can answer it.  This is just theorem proving.

  It's just theorem proving if your system has the necessary information in it. 
 Even then, it will be impossibly slow unless you've done some serious 
engineering, and I don't see you engineering your system to have a higher 
probability of having the necessary information or being able to go after it 
effectively.  This statement is equivalent to the Cyc approach.  Been there, 
done that, doesn't work in the real world (unless you have some new insight to 
offer).

  A fundamental point to my system is first to be able to paraphrase to a 
common ground and correctly recognize the similarities and differences between 
two statements.  Eventually (a long way off), I'd love for it to be able to 
*effectively* analogize.  Using ineffective, tremendously generic methods isn't 
going to get me there.  Yes, I am trading off flexibility -- but it's 
flexibility that I don't need for structure that should make my goal much more 
easily reached.

  2. If you want the system to spontaneously recognize that A paraphrases B 
(without being asked explicitly) then the problem requires forward-chaining, 
and may suffer from combinatorial explosion.  But still, it can be done.

  It will suffer from combinatorial explosion; therefore, it *can't* be done in 
the real world.  We're starting to reach the limits of effective discussion 
here.  Arguing AIXI and similar things is like debating how many angels can 
dance on the head of a pin.  I'm just not interested. 

  > My method is to build the system to the point of understanding Basic 
English + adult grammar by loading it all into my KR and building tools to 
compactly load new knowledge (including new terms) into the KR, to actively 
harvest new terms from one or more dictionaries and load it into the KR, and to 
then harvest knowledge from encyclopedias and load it into the KR.  Note that 
this method is *NOT* particularly harder because adult level grammar is not 
that much more difficult than what (A) requires and contrary to what you state, 
(B) does NOT require *ANY* commonsense knowledge to be hand-coded. 

  $$$$$$$$$$$$$$$$$$$$$$$$$
  By "harvesting from dictionaries" do you mean "true" understanding of 
dictionary entries, or just correlating words statistically?  Usually a 
dictionary entry uses several words or phrases to define a word.  Eg:  " Horn - 
 n. a hard permanent outgrowth, often curved and pointed, on the head of 
cattle, rhinoceroses, giraffes, and other esp. hoofed mammals, found singly, in 
pairs, or one in front of another."

  Yes, I mean "true" understanding of dictionary entries -- as in taking all of 
those nice words and putting them in my nice structured KR and ensuring that 
they have all the links that they need to paraphrase the definition in Basic 
English.  Yes, it is a **HUGE** task but that's why I'm trying to develop 
automated tools to do it instead of believing that I'm going to get humans to 
do all that work.

  YKY

------------------------------------------------------------------------------
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;
