Thanks for the comments. I now agree that a bootstrap program is what is
needed at this stage.  Bootstrap can mean more than one thing. My sense
is that while the program has to be able to acquire greater
sophistication after starting at a low level, I have to start with
something simple, which means I will need to be able to help the
program.  I was thinking of making some of its internal reasoning
visible on the screen, and I am also thinking of using the mouse to
point and right-click on different programmed choices to get the program
to focus on what I want it to focus on.  This would be very primitive,
but I think it might be effective. I am planning on using text-based IO,
but the addition of an ongoing abstract of its internal attempts at
reasoning, together with the point-and-click method of directing it to
attend to the ideas I want it to focus on, should help.  It's not
supposed to be human-level AGI, but more a way for me to experiment with
different kinds of ideas that I have about AGI.  And it will not just be
using logic-like discrete reasoning or weighted reasoning; it will also
be using structured reasoning, which is based on my sense that reasoning
is more than funneling all decision processes into one resultant.
(Although other advanced AI/AGI methods do not funnel all results into
one resultant, their exaggerated reliance on funnel reasoning (logic,
weighted reasoning, and so on) means that too many of their methods end
up combining resultants in a heavy, unnatural way.)
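
A toy sketch of the kind of loop I have in mind, in Python. All the
names here are made up for illustration; the program would print an
abstract of its current candidate "ideas," and my selection (standing in
for the point-and-right-click interface) would push one to the front of
its attention:

```python
# Illustrative sketch only: the program shows a numbered abstract of its
# internal reasoning, and the operator focuses its attention by picking
# one entry (a stand-in for pointing and right-clicking on screen).

def reasoning_abstract(ideas):
    """Render the program's current candidate ideas for the operator."""
    return "\n".join(f"[{i}] {idea}" for i, idea in enumerate(ideas))

def focus_on(ideas, choice):
    """Move the operator-selected idea to the front of the agenda."""
    focused = ideas[choice]
    return [focused] + [x for i, x in enumerate(ideas) if i != choice]

ideas = ["match input to a known pattern",
         "ask for clarification",
         "try a stored reaction"]
print(reasoning_abstract(ideas))
agenda = focus_on(ideas, 1)   # the operator "clicks" idea 1
print("focusing on:", agenda[0])
```

The point is only that the operator's pointing reorders the program's
agenda rather than replacing its reasoning outright.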

I am not programming with rules because I specifically want to test the
theory of reason-based reasoning.  Although this idea is not as simple
as using funnel reasoning, it is something that I think is necessary and
fresh enough to hold some promise.
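
To make the distinction concrete, here is a minimal sketch under my own
reading of "reason-based": a rule fires and discards the why, whereas a
reason-based decision keeps the reasons it was based on attached, so the
program (or I) can access and revisit them later. The class and field
names are purely illustrative:

```python
# Illustrative contrast between a rule that discards its "why" and a
# decision record that keeps its reasons accessible for later analysis.
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    reasons: list = field(default_factory=list)  # accessible, not discarded

def rule_based(situation):
    # classic rule: condition -> action; the rationale is implicit
    return "retreat" if situation == "threat" else "proceed"

def reason_based(situation):
    reasons = []
    if situation == "threat":
        reasons.append("situation was recognized as a threat")
        reasons.append("prior reaction 'retreat' reduced harm")
        return Decision("retreat", reasons)
    reasons.append("no reason to deviate from the default")
    return Decision("proceed", reasons)

d = reason_based("threat")
print(d.action)    # the same action a rule would pick
print(d.reasons)   # but the program can re-examine why it picked it
```

Both functions pick the same action here; the difference is that the
reason-based version leaves a trail the program can reason about.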

All these ideas are simple enough to put them within the realm of the
feasible. If the program can't get any traction, I can make it a little
more reliant on my pointing to the "ideas" I want it to use in its
internal attempts at reasoning. So feasibility, along with general
(genuine) learning, is my goal.

Jim Bromer



On Fri, Sep 14, 2012 at 8:42 PM, Stan Nilsen <[email protected]> wrote:

> Greetings Jim.
>   my reaction to a couple of statements below -
>
>
>
> On 09/14/2012 08:06 AM, Jim Bromer wrote:
>
>>
>>
>> I was wondering if a simple system of reason based reasoning could be
>> used to start an expanding system of knowledge acquisition. I am not
>> talking about a human-level AGI program. I am talking about a very
>> simple, very artificial system to test the viability and the flexibility
>> of the reason-based reasoning strategy for general learning.
>>
>
> stan:  I like the beginning line, especially the word "simple."  A
> bootstrap kind of program is exactly what is needed.
>
> To stick with the simple description, I would avoid the word "reason" and
> put in "rule." e.g. can a simple rule based system start an expanding
> system - expanding by virtue of its acquiring facts and rules. More about
> this below.
>
>
>> Reason-based reasoning is just a strategy in which analysis and response
>> to a situation is based on reasons which the AGI program can access.
>>
>
> rules no?
>
> >  .... <section removed >
>
>
>> So my question is whether or not reason-based reasoning can be used
>> effectively in a simplistic system to enable the program to make good
>> reactions based on what it had learned. But I do not fully understand
>> how human beings are able to adeptly recognize and react to complicated
>> situations.
>>
>
> stan:  Would it be fair to rephrase this question?  Something like " can
> one build a simple rule based system that will make good choices based on
> rules and facts it has acquired?"
>
> If so, it seems obvious that the system can only operate on rules and
> facts acquired.
>
> The bigger question is can the system take a fact or rule gathered from
> its experience in the environment and place it where its rule processor
> finds it at the right time? What are the rules that tell the system how to
> utilize this 'mined' knowledge?
>
> simple huh?
>
>
>
>> Analysis and reactions do not only act on some form of output. They can
>> govern the analysis and reaction modes as well.
>>
>
> stan:  True, the "output" could simply be a state change in the analyzer.
>  This often is the case when writing a program - branch on a condition...
>
> ... portions removed
>
>
>> One problem that I do not completely understand is how concepts are
>> integrated. Reason-based reasoning will help but it does not explain
>> everything. I am thinking about starting with a primitive artificial
>> language to make the program work a little like a programming method.
>> However, with reason-based reasoning that is able to act on recognition
>> and reaction methods there is no reason why I could not experiment with
>> language acquisition.
>>
>>  stan:  If the goal is to understand how concepts are integrated, put on
> the designer hat and say "how would I design a system that can integrate
> concepts?"  What do the players look like?
> "Concepts" is a bit nebulous to me.  More concrete might be something like
> " how are rules and facts integrated in the system I am designing?"
>
> take care...
> Stan
>
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/10561250-164650b2
> Modify Your Subscription: https://www.listbox.com/member/?&id_secret=10561250-2cb1ec2f
> Powered by Listbox: http://www.listbox.com
>



