Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread James Ratcliff
Interesting points, but I believe you can get around a lot of the problems with 
two additional factors: 
a. using large quantities of quality text (i.e., novels, newspapers, and 
similar sources).
b. using an interactive built-in 'checker' system, assisted learning where the 
AI could consult with humans in a simple way.

Using something like this, you could check 
"The moon is a dog" and see that it has a really low probability, and if 
something else was possibly untrue, it could ask a few humans and poll for the 
answer: 
"Is the moon a dog?"

This should allow for a large amount of basic information to be quickly 
gathered, and of a fairly high quality.
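
To make this concrete, here's a rough sketch in Python (purely illustrative: 
the corpus format and the ask_humans hook are assumptions of mine, not any 
existing system):

import re
from collections import Counter

# Rough sketch only.  corpus_text is assumed to be one big string of
# quality text (novels, newspapers); ask_humans is an assumed hook that
# returns a list of "yes"/"no" answers from a few polled people.

def corpus_plausibility(statement, corpus_text):
    """Fraction of corpus sentences that contain the statement."""
    sentences = re.split(r'[.!?]+', corpus_text.lower())
    hits = sum(1 for s in sentences if statement.lower() in s)
    return hits / float(max(len(sentences), 1))

def check_statement(statement, corpus_text, ask_humans, threshold=1e-6):
    if corpus_plausibility(statement, corpus_text) > threshold:
        return True  # the corpus gives at least some support
    # Corpus is (nearly) silent, so poll a few humans:
    # "Is the moon a dog?" should come back overwhelmingly "no".
    votes = ask_humans("Is it true that %s?" % statement, n=5)
    return Counter(votes).most_common(1)[0][0] == "yes"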

James

Matt Mahoney [EMAIL PROTECTED] wrote: 
--- Charles D Hixson  wrote:

 Mark Waser wrote:
   The problem of logical reasoning in natural language is a pattern 
  recognition problem (like natural language recognition in general).  For 
  example:
 
   - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
   - Cities have tall buildings.  New York is a city.  Therefore New York 
  has tall buildings.
   - Summers are hot.  July is in the summer.  Therefore July is hot.
 
   After many examples, you learn the pattern and you can solve novel logic 
  problems of the same form.  Repeat for many different patterns.
   
  Your built in assumptions make you think that.  There are NO readily 
  obvious patterns in the examples you gave except one obvious example of 
  standard logical inference.  Note:
 
  * In the first clause, the only repeating words are "green" and
"Kermit".  Maybe I'd let you argue the plural of "frog".
  * In the second clause, the only repeating words are "tall
buildings" and "New York".  I'm not inclined to give you the plural
of "city".  There is also the minor confusion that "tall buildings"
and "New York" are multiple words.
  * In the third clause, the only repeating words are "hot" and "July". 
Okay, you can argue "summers".
  * Across sentences, I see a regularity between the first and the
third of "As are B.  C is A.  Therefore, C is B."
 
  Looks far more to me like you picked out one particular example of 
  logical inference and called it pattern matching. 
   
  I don't believe that your theory works for more than a few very small, 
  toy examples.  Further, even if it did work, there are so many 
  patterns that approaching it this way would be computationally 
  intractable without a lot of other smarts.
   
  
 It's worse than that.  "Frogs are green" is a generically true 
 statement that isn't true in most particular cases.  E.g., some frogs 
 are yellow, red, and black without any trace of green on them that I've 
 noticed.  (Most frogs may be predominantly green; e.g., leopard frogs are 
 basically green, but with black spots.)
 
 Worse, although Kermit is identified as a frog, Kermit is actually a 
 cartoon character.  As such, Kermit can be run over by a tank without 
 being permanently damaged.  This is not true of actual frogs.
 
 OTOH, there *IS* pattern matching going on.  It's just not evident at 
 the level of structure (or rather, only partially evident).
 
 Were I to rephrase the sentences more exactly they would go something 
 like this:
 Kermit is a representation of a frog.
 Frogs are typically thought of as being green.
 Therefore, Kermit will be displayed as largely greenish in overall hue, 
 to enhance the representation.
 
 Note that one *could* use similar logic to deduce that Miss Piggy is 
 more than 10 times as tall as Kermit.  This would be incorrect.  Thus, 
 what is being discussed here is not mandatory characteristics, but 
 representational features selected to harmonize an image with both its 
 setting and internal symbolisms.  As such, only artistically selected 
 features are highlighted, and other features are either 
 suppressed or overridden by other artistic choices.  What is being 
 created is a dreamscape rather than a realistic image.
 
 On to the second example.  Here again one is building a dreamscape, 
 selecting harmonious imagery.  Note that it's quite possible to build a 
 dreamscape city where there are no tall buildings...or only one.  
 (Think of the Emerald City of Oz.  Or for that matter of the Sunset 
 District of San Francisco.  Facing in many directions you can't see a 
 single building more than two stories tall.)  But it's also quite 
 realistic to imagine tall buildings.  By specifying tall buildings, one 
 filters out a different set of harmonious city images.
 
 What these patterns do is enable one to filter out harmonious images, 
 etc. from the databank of past experiences.

These are all valid criticisms.  They explain why logical reasoning in natural
language is an unsolved problem.  Obviously simple string matching won't work.
 The system must also recognize sentence structure, word associations,

Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Mike Dougherty

On 6/11/07, James Ratcliff [EMAIL PROTECTED] wrote:

Interesting points, but I believe you can get around a lot of the problems
with two additional factors:
a. using large quantities of quality text (i.e., novels, newspapers, and
similar sources).
b. using an interactive built-in 'checker' system, assisted learning where
the AI could consult with humans in a simple way.


I would hope that a candidate AGI would have the capability of
emailing anyone who has ever talked with it.  E.g., after a few
minutes' chat, the AI asks the human for their email in case it
has any follow-up questions - the same way any human interviewer
might.  If 10 humans are asked the same question, the statistically
oddball response can probably be ignored (or reduced in weight) to
clarify the answer.
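
Something like this toy sketch could do that down-weighting (the 20% cutoff 
and the residual weight are arbitrary choices of mine, just for illustration):

from collections import Counter

def weighted_consensus(answers, oddball_weight=0.1):
    """answers: raw answer strings from ~10 polled humans."""
    counts = Counter(answers)
    weights = {}
    for answer, count in counts.items():
        # An answer given by fewer than ~20% of respondents is treated
        # as statistically oddball: kept, but at sharply reduced weight.
        if count < 0.2 * len(answers):
            weights[answer] = count * oddball_weight
        else:
            weights[answer] = float(count)
    return max(weights, key=weights.get), weights

# weighted_consensus(["no"] * 9 + ["yes"])  ->  ("no", {"no": 9.0, "yes": 0.1})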



Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread James Ratcliff
Correct, but I don't believe that systems (like Cyc) are doing this type of 
active learning now, and it would help to gather quality information and 
fact-check it.  

Cyc does have some interesting projects where it takes a proposed statement 
and, when an engineer is working with it, will go out and do a text-match 
search in Google to check the validity of the statement.  So it would do 
something like a Google search for "the moon is a dog", returning something 
like 1 hit in 4 billion pages - so very unlikely.

This goes one step towards my thoughts, but of course the Internet as a whole 
is not a trusted source of quality information, so one would need to use a 
more refined base.
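
Roughly the idea, as a sketch (get_hit_count here is a hypothetical stand-in 
for whatever search API you'd use - not a real Google or Cyc interface):

def statement_plausibility(statement, subject, get_hit_count):
    """Compare how often the whole claim occurs vs. its subject alone."""
    claim_hits = get_hit_count('"%s"' % statement)  # exact-phrase query
    subject_hits = get_hit_count('"%s"' % subject)
    if subject_hits == 0:
        return None  # no evidence either way
    return claim_hits / float(subject_hits)

# statement_plausibility("the moon is a dog", "the moon", search) would
# return a vanishingly small ratio, flagging the claim as very unlikely.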

Also, OpenMind Common Sense (site down) is a very interesting project which 
does some information gathering using humans who log into the system and check 
and input information.  It produced some interesting results, though on a 
limited basis.


James





___
James Ratcliff - http://falazar.com
Looking for something...
 


Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Matt Mahoney

--- James Ratcliff [EMAIL PROTECTED] wrote:

 Interesting points, but I believe you can get around a lot of the problems
 with two additional factors: 
 a. using large quantities of quality text (i.e., novels, newspapers, and
 similar sources).
 b. using an interactive built-in 'checker' system, assisted learning where
 the AI could consult with humans in a simple way.

But that is not the problem I am trying to get around.  A system that learns
to solve logical word problems should be trainable on text like:

- A greeb is a floogle.  All floogles are blorg.  Therefore...

simply because it is something the human brain can do.
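
Even a hard-coded toy version shows the shape of the problem (this sketch is 
mine, with one fixed template; the point is that a real system would have to 
*learn* such templates from many examples rather than be handed them):

import re

# One fixed template of the form "A X is a Y.  All Ys are Z."
SYLLOGISM = re.compile(
    r"A (\w+) is a (\w+)\.\s+All (\w+)s are (\w+)\.", re.IGNORECASE)

def complete_syllogism(text):
    m = SYLLOGISM.search(text)
    if m and m.group(2).lower() == m.group(3).lower():  # middle terms match
        return "Therefore a %s is %s." % (m.group(1), m.group(4))
    return None

# complete_syllogism("A greeb is a floogle.  All floogles are blorg.")
# -> "Therefore a greeb is blorg."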


 