Jim,
I'd submit to you that your system should have two loops: a reaction loop
to handle familiar recurring situations, and a deliberation loop to be
invoked when habitual or reflex reactions become insufficient (i.e., fail
to achieve their desired outcomes beyond some threshold of the time, e.g.,
failing 30% of the time).

One should seriously take a look at Apple's Siri, since a system like that
may evolve into an AGI if it is equipped with sufficient back-end services
(i.e., actions). It has a reasoner, which I presume responds to user
requests in a rule-based or case-based manner, and it is tied into various
service providers.
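
A toy version of that kind of rule-based dispatch tied to back-end
services might look like the following; the rules, intents, and providers
are all invented for illustration:

import re

RULES = [
    # (pattern over the user's request, intent name)
    (re.compile(r"\bweather\b", re.I), "get_weather"),
    (re.compile(r"\b(book|reserve)\b.*\btable\b", re.I), "book_table"),
]

def weather_provider(request):
    return "Forecast: (call a weather service here)"

def booking_provider(request):
    return "Reservation: (call a booking service here)"

PROVIDERS = {  # intent -> back-end service (i.e., action)
    "get_weather": weather_provider,
    "book_table": booking_provider,
}

def respond(request):
    for pattern, intent in RULES:
        if pattern.search(request):
            return PROVIDERS[intent](request)
    return "Sorry, I can't help with that yet."

print(respond("What's the weather in Boston?"))
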
Cheers!
~PM
Date: Fri, 14 Sep 2012 10:06:17 -0400
Subject: [agi] Simplistic Test of Reason-Based Reasoning
From: [email protected]
To: [email protected]



I was wondering if a simple system of reason-based reasoning could be used
to start an expanding system of knowledge acquisition. I am not talking
about a human-level AGI program. I am talking about a very simple, very
artificial system to test the viability and flexibility of the
reason-based reasoning strategy for general learning.



Reason-based reasoning is just a strategy in which analysis of and
response to a situation are based on reasons that the AGI program can
access. In some ways this makes a great deal of sense, and it is almost
impossible to understand why the idea has not gained traction in AI
discussions. In another sense the method may be a little more complicated
than it seems, because it requires the AGI program to integrate knowledge
in ways that I don't fully understand, and it can act as an obstruction to
making efficient decisions and reactions. As our insights become better
developed, we become more adept at reacting to the situations for which
the insight is relevant without really thinking of all the reasons we
react the way we do. This is part of how habits are formed, and as best I
can tell, part of the reason we can react to situations as quickly as we
can is that we can respond effectively to familiar situations without
considering all the reasons why our reactions should work. As we are
learning, our reactions have to be tailored with reasons for making
decisions, but once we learn to recognize a situation we seem to react
without having to focus on all of the reasons why we should make one
decision or another. Obviously this doesn't always work, but it works well
enough most of the time to look spectacular from my perspective. Of
course, even with expertise we are still looking for the reasons we should
react in certain ways, but our focus seems to sit at a more sophisticated
level than it did at an earlier stage of learning.
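
To make the strategy concrete, here is one rough Python sketch of
reactions that carry their own inspectable reasons, with a crude habit
shortcut once a reaction has proven itself. The encoding is just my guess
at one possibility:

from dataclasses import dataclass

@dataclass
class Reaction:
    action: str
    reasons: list      # reasons the program can access and revise
    successes: int = 0

class ReasonBasedReasoner:
    def __init__(self, promote_after=5):
        self.known = {}                 # situation -> Reaction
        self.promote_after = promote_after

    def learn(self, situation, action, reasons):
        self.known[situation] = Reaction(action, reasons)

    def record_success(self, situation):
        self.known[situation].successes += 1

    def react(self, situation):
        r = self.known.get(situation)
        if r is None:
            return None                 # novel: deliberate from scratch
        if r.successes >= self.promote_after:
            return r.action             # habitual: skip the reasons
        # Still learning: re-examine the recorded reasons first.
        if all(self.reason_still_holds(reason) for reason in r.reasons):
            return r.action
        return None

    def reason_still_holds(self, reason):
        # Placeholder; a real system would test the reason against
        # its current knowledge.
        return True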



So my question is whether or not reason-based reasoning can be used
effectively in a simplistic system to enable the program to make good
reactions based on what it has learned. But I do not fully understand how
human beings are able to adeptly recognize and react to complicated
situations.



Analysis and reactions do not act only on some form of output. They can
govern the analysis and reaction modes as well. One issue is how much a
reaction to a particular situation should affect a previously learned
analytical or reactive method. You would not want a system to forget
everything it ever learned in response to a single situation, but you do
want the program to learn how to improve previously acquired reactive and
analytical methods. One of the issues that I am aware of is that insights
are almost always tied to the generality level of a subject matter, and
this idea of a generality level applies to analytical and reactive methods
as well. For example, a general modification of reactive methods might be
applied temporarily at a global level. This implies that a global reaction
might impact a broad variety of analytical and reactive methods. That in
turn implies that these methods can be modified by other methods that are
not directly embedded in the reaction. I can go on and on about this, but
no one has yet shown much interest in my thoughts on the issue.
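
To illustrate one way a temporary global modification could touch methods
it is not directly embedded in, here is a sketch; the whole mechanism, a
stack of modifiers consulted by every reactive method, is just one
invented arrangement:

class MethodRegistry:
    def __init__(self):
        self.reactive_methods = {}   # name -> callable
        self.global_modifiers = []   # temporary, applied to every method

    def push_modifier(self, modifier):
        self.global_modifiers.append(modifier)

    def pop_modifier(self):
        self.global_modifiers.pop()

    def run(self, name, situation):
        result = self.reactive_methods[name](situation)
        # Modifiers get a chance to adjust any method's output globally.
        for modify in self.global_modifiers:
            result = modify(name, situation, result)
        return result

# Example: a temporary global mode that alters every reaction's output.
registry = MethodRegistry()
registry.reactive_methods["greet"] = lambda s: "hello, " + s
registry.push_modifier(lambda name, s, r: r.upper())
print(registry.run("greet", "world"))   # HELLO, WORLD
registry.pop_modifier()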



One problem that I do not completely understand is how concepts are
integrated. Reason-based reasoning will help, but it does not explain
everything. I am thinking about starting with a primitive artificial
language so that the program works a little like a programming method.
However, with reason-based reasoning that is able to act on recognition
and reaction methods, there is no reason why I could not experiment with
language acquisition.
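
As a starting point, the primitive artificial language could be as small
as a verb-argument command form; the grammar and vocabulary below are
invented only to show the shape:

# "put block table" -> ("put", ["block", "table"])
def parse(utterance):
    verb, *args = utterance.split()
    return verb, args

VOCAB = {
    "put": lambda args: "placing %s on %s" % (args[0], args[1]),
    "find": lambda args: "searching for %s" % args[0],
}

def interpret(utterance):
    verb, args = parse(utterance)
    if verb in VOCAB:
        return VOCAB[verb](args)
    # Unknown verb: a learning system could ask for reasons or
    # examples here and add a new entry to VOCAB.
    return "unknown verb: " + verb

print(interpret("put block table"))   # placing block on table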

This shows that the idea I am talking about is clearly different from the
old, narrower AI methods like expert systems. However, while I think the
idea could work to enable the program to gain general knowledge, I am not
saying it would be anything near human-level reasoning. I am just saying
that if a simplistic method could gain some low-level traction for general
reasoning in novel ways, then I would have a better base from which to
conduct experiments on more complicated problems. I am not sure whether I
am going to try this or not, but it certainly seems interesting to me
right now.



Jim Bromer