This has nothing to do with whether "we can" or "can't" - and never has had - 
unless you're wedded to your textbooks and pre-set methods.

It's all a question of HOW we can and do.


From: Piaget Modeler 
Sent: Monday, June 18, 2012 1:43 AM
To: AGI 
Subject: RE: [agi] Real World Reasoning


...Stepping into the fray....(without body armor)....

People have expectations (i.e., predictions) about everything, and use 
predictions continually from moment to moment. 
Machines can be instructed to do the same. 
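As a minimal sketch of that claim (everything here - the class name, the events, the first-order Markov assumption - is invented for illustration, not drawn from the thread): a machine can maintain transition counts over its observations and continually predict the most-expected next event, updating from moment to moment.

```python
from collections import Counter, defaultdict

class TransitionPredictor:
    """Predict the next observation from transition counts
    (a first-order Markov model over a stream of events)."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # prev event -> next-event counts
        self.prev = None

    def observe(self, symbol):
        """Record one observation, updating the transition counts."""
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        """Return the most-expected next observation, or None if unseen."""
        options = self.counts.get(self.prev)
        if not options:
            return None
        return options.most_common(1)[0][0]

predictor = TransitionPredictor()
for event in ["wake", "coffee", "work", "wake", "coffee", "work", "wake"]:
    predictor.observe(event)
print(predictor.predict())  # after "wake", "coffee" is the expected next event
```

The point is only that "expectation" needs nothing exotic: counting what followed what, and reading the counts back, already yields moment-to-moment prediction.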


As regards reasoning, there are many forms of reasoning, from predicate 
calculus, to multi-strategy inference, to simple integration
(i.e., crossover) and differentiation (in the Piagetian sense), all of which can 
be programmed into a machine. 
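To make the first of those concrete, here is a toy forward-chaining engine over unary predicates (the facts, rules, and names below are invented for illustration): facts are (predicate, individual) pairs, and each rule says "if P(x) then Q(x)".

```python
# Knowledge base: ground facts and implication rules over unary predicates.
facts = {("human", "socrates"), ("human", "plato")}
rules = [("human", "mortal"), ("mortal", "will_die")]

def forward_chain(facts, rules):
    """Repeatedly apply rules to the fact set until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for if_pred, then_pred in rules:
            for pred, x in list(derived):
                if pred == if_pred and (then_pred, x) not in derived:
                    derived.add((then_pred, x))
                    changed = True
    return derived

closure = forward_chain(facts, rules)
print(("mortal", "socrates") in closure)   # True
print(("will_die", "plato") in closure)    # True, via two chained rules
```

Real systems add variables, unification, and n-ary predicates, but the mechanical character of the inference is the same.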


To me, this all sounds like two camps arguing: the "I don't believe it, show 
me" camp versus the "Yes we can" camp.  
Both camps hold justified but incorrigibly held beliefs.  It's just a matter 
of choosing sides at this point.  


I know which side I'm on.


Cheers,


~PM.
   


--------------------------------------------------------------------------------
Date: Sun, 17 Jun 2012 20:20:04 -0400
Subject: Re: [agi] Real World Reasoning
From: [email protected]
To: [email protected]


On Sun, Jun 17, 2012 at 6:43 PM, Ben Goertzel <[email protected]> wrote:
If you knew more about real-world uses of logic systems, you would
know that **inference control** doesn't have to be done by logical
mechanisms.... The choice of which premises to explore in a logical
inference chain, can be done by lots of methods besides logic. That
is, in a real-world reasoning context, logical inference will
generally be nudged and guided in the right direction by non-logical
methods...
----------------------------------------------------------
But the fact that many of these actions can be said to be logical decisions, 
given the evidence that the program is working with at the time, is not just a 
coincidence.  

The original theory that logic was the highest form of reasoning, the kind of 
reasoning that scientists do, now looks like pretty much a thing of the past.  
However, the fact that our usual methods of reasoning can be described by 
*form* shows that logical reasoning, or reasoning by form, is not something 
that can be dismissed.

When we try to anticipate something that is too far out in the future, our 
predictions about the event can be pretty awful.  (People who talk about using 
"prediction" in AGI are people who have never actually written out their 
"predictions" to see whether they can actually use predictions in life.  I think 
the term "prediction" in AGI just refers to something that is already known.)  
If our predictions about what, precisely, is going to happen during the next 
month are as bad as they usually are, it should not be much of a surprise to 
discover that the *forms*, or formal categorical methods, that an AGI program 
could use to deal with what might happen in the next month might be a little 
off as well.

However, to say that the reasonable methods used to "generally nudge and 
guide logical inference in the right direction" are not themselves products of 
logic is a little dubious.  An educated guess is one that is based on a logical 
use of insight - although the logic may be hidden.

Jim Bromer


 
On Sun, Jun 17, 2012 at 6:43 PM, Ben Goertzel <[email protected]> wrote:

  On Sun, Jun 17, 2012 at 2:11 AM, Mike Tintner <[email protected]> 
wrote:

  > How do you get to A): ?
  >
  >
  > A)
  > Two people in a big crowded space are unlikely to notice each other
  >
  > from:
  >
  > "Sue and Jane were both at the clinic at 4.00 - did they see each other?"
  >
  > How do you know to ask questions about the clinic and Sue and Jane and
  > seeing?
  >
  > Please outline the **logical** principles  - esp. those you think existed in
  > your head about "crowded spaces", "people" and "seeing."



  If you knew more about real-world uses of logic systems, you would
  know that **inference control** doesn't have to be done by logical
  mechanisms....  The choice of which premises to explore in a logical
  inference chain, can be done by lots of methods besides logic.  That
  is, in a real-world reasoning context, logical inference will
  generally be nudged and guided in the right direction by non-logical
  methods...

  In this case, a simple lookup into episodic memory would probably do
  the trick...

  If the system's memory contained many cases of people in the same
  place who did see each other, and also many cases of people in the
  same place who did not see each other...

  THEN, a supervised learning method like MOSES could be automatically
  launched inside the system, to learn which patterns distinguish the
  "did see" cases from the "didn't see" cases...

  One of these patterns might be: if the people were in a place that is
  both large and crowded, they often did not see each other...

  This pattern, derived via inductive pattern-recognition from a set of
  remembered instances, would then guide logical inference...

  Note that a mind can try out 10000s of possible logical inferences
  very quickly, in parallel, until it finds one that seems to yield
  useful information about the subject at hand...

  Using an internal simulation-world, as you suggest, would be one
  possible way to solve the problem you mention.  However, there are
  many other ways a mind could solve it, and I've described one:
  uncertain logical inference, with inference control guided by
  supervised learning acting on declarative episodic memory...


  -- Ben G
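The pipeline quoted above (episodic memory mined by a supervised learner, whose output then guides inference) can be sketched roughly as follows. Every episode, feature, and score below is invented for illustration, and the exhaustive feature-combination search is a trivial stand-in for a real learner such as MOSES:

```python
from itertools import combinations

# Remembered episodes: were two people in a large place? a crowded one?
# and did they see each other?  (Stand-in data, not from any real system.)
episodes = [
    {"large": True,  "crowded": True,  "saw": False},
    {"large": True,  "crowded": True,  "saw": False},
    {"large": False, "crowded": False, "saw": True},
    {"large": True,  "crowded": False, "saw": True},
    {"large": False, "crowded": True,  "saw": True},
]

def learn_pattern(episodes):
    """Find the feature combination most associated with not-seeing:
    a toy supervised learner standing in for MOSES."""
    features = ["large", "crowded"]
    best, best_score = None, 0.0
    for r in range(1, len(features) + 1):
        for combo in combinations(features, r):
            matching = [e for e in episodes if all(e[f] for f in combo)]
            if not matching:
                continue
            # Fraction of matching episodes where the pair did NOT see each other.
            score = sum(not e["saw"] for e in matching) / len(matching)
            if score > best_score:
                best, best_score = combo, score
    return best, best_score

pattern, confidence = learn_pattern(episodes)
print(pattern, confidence)  # ('large', 'crowded') wins with score 1.0
```

The learned pattern, "large and crowded predicts not-seeing," is exactly the kind of inductively derived regularity that an inference controller could then use to prioritize premises about crowd size when answering the Sue-and-Jane question, rather than deriving that priority by logic alone.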



      AGI | Archives  | Modify Your Subscription  



