Matt Mahoney <[EMAIL PROTECTED]> wrote: 
--- James Ratcliff  wrote:

> I believe that is just a simple rule that you can input in most systems, and
> it will match that point, but programming in each of those rules is a very
> costly affair.

Fortunately, natural language (unlike artificial language) has a structure
that allows rules to be learned, so you don't have to program them.  You only
need to program the learning algorithm and provide training data.

I'm not sure we can have a real "learning algorithm" in that sense, though, as 
the algorithm would have to match all those rules effectively.
We can extract and match patterns, but the actual rules they create will 
have to be helped along by human teachers in some way... But each rule we 
learn will speed and assist the learning of all the other rules we need.
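To make the extract-and-match idea concrete, here is a minimal sketch in Python of one hand-written syllogism template of the kind being discussed. The template and function names are hypothetical illustrations; a real learner would have to induce thousands of such templates from training text rather than start from a single hard-coded one.

```python
import re

# One hypothetical learned template for the pattern
# "A X is a Y.  All Ys are Z.  Therefore..."
# (?P=y) is a backreference, so the middle noun must repeat.
SYLLOGISM = re.compile(
    r"A (?P<x>\w+) is a (?P<y>\w+)\.\s+"
    r"All (?P=y)s are (?P<z>\w+)\.\s+"
    r"Therefore\.\.\."
)

def complete_syllogism(text):
    """Fill in the conclusion if the text matches the template, else None."""
    m = SYLLOGISM.search(text)
    if m is None:
        return None
    return f"Therefore the {m.group('x')} is {m.group('z')}."

print(complete_syllogism(
    "A greeb is a floogle.  All floogles are blorg.  Therefore..."))
# -> Therefore the greeb is blorg.
```

Note that this works even on nonsense words, which is exactly why each template is so brittle on its own: it knows the form of the inference and nothing about the world.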



> 
> James
> 
> 
> Matt Mahoney  wrote: 
> --- James Ratcliff  wrote:
> 
> > Interesting points, but I believe you can get around a lot of the problems
> > with two additional factors:
> > a. using large quantities of quality text (i.e., novels and newspapers) or
> > similar sources.
> > b. using an interactive built-in 'checker' system, assisted learning where
> > the AI could consult with humans in a simple way.
> 
> But that is not the problem I am trying to get around.  A system that learns
> to solve logical word problems should be trainable on text like:
> 
> - A greeb is a floogle.  All floogles are blorg.  Therefore...
> 
> simply because it is something the human brain can do.
> 
> 
> > 
> > Using something like this, you could check 
> > "The moon is a dog" and see that it has a really low probability, and if
> > something else was possibly untrue, it could ask a few humans and poll for
> > the answer:
> > "Is the moon a dog?"
> > 
> > This should allow for a large amount of basic information to be quickly
> > gathered, and of a fairly high quality.
> > 
> > James
> > 
> > Matt Mahoney  wrote: 
> > --- Charles D Hixson  wrote:
> > 
> > > Mark Waser wrote:
> > > > >> The problem of logical reasoning in natural language is a pattern
> > > > >> recognition problem (like natural language recognition in general).
> > > > >> For example:
> > > >
> > > > >> - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
> > > > >> - Cities have tall buildings.  New York is a city.  Therefore New
> > > > >> York has tall buildings.
> > > > >> - Summers are hot.  July is in the summer.  Therefore July is hot.
> > > >
> > > > >> After many examples, you learn the pattern and you can solve novel
> > > > >> logic problems of the same form.  Repeat for many different patterns.
> > > >  
> > > > Your built-in assumptions make you think that.  There are NO readily 
> > > > obvious patterns in the examples you gave except one obvious example of
> > > > standard logical inference.  Note:
> > > >
> > > >     * In the first clause, the only repeating words are green and
> > > >       Kermit.  Maybe I'd let you argue the plural of frog.
> > > >     * In the second clause, the only repeating words are tall
> > > >       buildings and New York.  I'm not inclined to give you the plural
> > > >       of city.  There is also the minor confusion that tall buildings
> > > >       and New York are multiple words.
> > > >     * In the third clause, the only repeating words are hot and July. 
> > > >       Okay, you can argue summers.
> > > >     * Across sentences, I see a regularity between the first and the
> > > >       third of "As are B.  C is A.  Therefore, C is B."
> > > >
> > > > Looks far more to me like you picked out one particular example of 
> > > > logical inference and called it pattern matching. 
> > > >  
> > > > I don't believe that your theory works for more than a few very small
> > > > toy examples.  Further, even if it did work, there are so many 
> > > > patterns that approaching it this way would be computationally 
> > > > intractable without a lot of other smarts.
> > > >  
> > > >
> ------------------------------------------------------------------------
> > > It's worse than that.  "Frogs are green." is a generically true 
> > > statement that isn't true in many particular cases.  E.g., some frogs 
> > > are yellow, red, and black without any trace of green on them that I've 
> > > noticed.  Most frogs may be predominately green (e.g., leopard frogs are
> > > basically green, but with black spots).
> > > 
> > > Worse, although Kermit is identified as a frog, Kermit is actually a 
> > > cartoon character.  As such, Kermit can be run over by a tank without 
> > > being permanently damaged.  This is not true of actual frogs.
> > > 
> > > OTOH, there *IS* pattern matching going on.  It's just not evident at 
> > > the level of structure (or rather only partially evident).
> > > 
> > > Were I to rephrase the sentences more exactly they would go something 
> > > like this:
> > > Kermit is a representation of a frog.
> > > Frogs are typically thought of as being green.
> > > Therefore, Kermit will be displayed as largely greenish in overall hue, 
> > > to enhance the representation.
> > > 
> > > Note that one *could* use similar "logic" to deduce that Miss Piggy is 
> > > more than 10 times as tall as Kermit.  This would be incorrect.  Thus, 
> > > what is being discussed here is not mandatory characteristics, but 
> > > representational features selected to harmonize an image with both its 
> > > setting and internal symbolisms.  As such, only artistically selected 
> > > features are highlighted, and other features are either 
> > > suppressed or overridden by other artistic choices.  What is being 
> > > created is a "dreamscape" rather than a realistic image.
> > > 
> > > On to the second example.  Here again one is building a dreamscape, 
> > > selecting harmonious imagery.  Note that it's quite possible to build a 
> > > dreamscape city where there are no tall buildings... or only one.  
> > > (Think of the Emerald City of Oz.  Or for that matter of the Sunset 
> > > District of San Francisco.  Facing in many directions you can't see a 
> > > single building more than two stories tall.)  But it's also quite 
> > > realistic to imagine tall buildings.  By specifying tall buildings, one 
> > > filters out a different set of harmonious city images.
> > > 
> > > What these patterns do is enable one to filter out harmonious images, 
> > > etc. from the databank of past experiences.
> > 
> > These are all valid criticisms.  They explain why logical reasoning in
> > natural language is an unsolved problem.  Obviously simple string matching
> > won't work.  The system must also recognize sentence structure, word
> > associations, different word forms, etc.  Doing this requires a lot of
> > knowledge about language and about the world.  After those patterns are
> > learned (and there are hundreds of thousands of them), then it will be
> > possible to learn the more complex patterns associated with reasoning.
> > 
> > The other criticism is that the statements are not precisely true.  (July
> > is cold in Australia.)  But the logic is still valid.  It should be
> > possible to train a purely logical system on examples using obviously
> > false statements, like:
> > 
> > - The moon is a dog.  All dogs are made of green cheese.  Therefore the
> > moon is made of green cheese.
> > 
> > The reasoning is correct, but confusing to many people.  This fact argues
> > (to me anyway) that logical reasoning is not even a good model of human
> > thought.
> > 
> > 
> > -- Matt Mahoney, [EMAIL PROTECTED]
> > 
> > -----
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?&;
> > 
> > 
> > 
> > _______________________________________
> > James Ratcliff - http://falazar.com
> > Looking for something...
> >  
> > 
> 
> 
> -- Matt Mahoney, [EMAIL PROTECTED]
> 
> 
> 
> 
> _______________________________________
> James Ratcliff - http://falazar.com
> Looking for something...
>         
> 


-- Matt Mahoney, [EMAIL PROTECTED]




_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       

