I need to start with something simple enough that I can actually get to it and report on it.

A simple AGI program could be designed to learn to detect those human responses that indicate right and wrong, and to learn that it can act in ways that may elicit particular (kinds of) human responses.  (This does not have to be perfect.)
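
The feedback loop described above might be sketched as a tiny trial-and-error learner. This is only an illustration under my own assumptions (a fixed action set, and human responses reduced to a few approval words like "good" or "bad"), not anything from the original post:

```python
import random

class FeedbackLearner:
    """Learns which of its actions tend to elicit approving human responses."""

    def __init__(self, actions):
        self.actions = list(actions)
        # For each action, track [approvals, total trials].
        self.scores = {a: [0, 0] for a in self.actions}

    def choose(self, explore=0.1):
        # Occasionally try a random action; otherwise pick the action
        # with the best approval rate seen so far.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: (self.scores[a][0] / self.scores[a][1])
                   if self.scores[a][1] else 0.0)

    def observe(self, action, human_response):
        # The approval vocabulary here is an assumption for illustration.
        approved = human_response in ("good", "yes", "right")
        pos, total = self.scores[action]
        self.scores[action] = [pos + (1 if approved else 0), total + 1]
```

The point is only that crude counting over human reactions is enough for the program to start preferring actions that draw approval, with exploration keeping it from locking in too early.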

From there it might derive the awareness that some interactions with humans are intended to signal that certain references should be (or could be) associated with certain behaviors, based on its previous experiences with the human user.
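
One minimal way to picture that reference-to-behavior association, again as my own illustrative sketch (the names and the confirmed/unconfirmed signal are assumptions), is a table of bindings whose confidence grows or shrinks with each interaction:

```python
class ReferenceBinder:
    """Associates a user's 'references' with behaviors, based on past interactions."""

    def __init__(self):
        # reference -> {behavior: net confirmation count}
        self.bindings = {}

    def note_interaction(self, reference, behavior, confirmed):
        # Each interaction strengthens or weakens a candidate binding.
        counts = self.bindings.setdefault(reference, {})
        counts[behavior] = counts.get(behavior, 0) + (1 if confirmed else -1)

    def behavior_for(self, reference):
        # Return the best-supported behavior, or None if nothing is
        # positively supported yet.
        counts = self.bindings.get(reference)
        if not counts:
            return None
        best = max(counts, key=counts.get)
        return best if counts[best] > 0 else None
```

Nothing here is hard-coded in advance; the bindings emerge from whatever experiences the program happens to have had with that user.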

Situations that are very complex for a child can sometimes be handled by choosing some sequence of processes the child has used in simpler circumstances.  Similarly, a simple AGI program might be designed to attempt complicated situations by trying different sequences of methods that each worked in simpler situations. Then, as it has some success with more complicated situations, it can reuse the methods that worked with those complicated situations as well.  One of the key requirements is the awareness that this complicated behavior must rely on a combination of declarative knowledge and procedural knowledge.  So the program has to have the potential to ‘think’ about how it might use both declarative knowledge and procedural knowledge to learn something.
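
A rough sketch of that composition idea, under my own simplifying assumptions (situations as dictionaries, methods as state-transforming functions): the methods are the procedural knowledge, and the memory of which sequences solved which kinds of situation is the declarative knowledge the program can ‘think’ with later.

```python
from itertools import permutations

def solve_by_composition(situation, methods, is_solved, memory):
    """Try sequences of simple methods on a complicated situation.

    methods: dict of name -> function(state) -> state  (procedural knowledge)
    memory:  dict of situation kind -> method sequence (declarative knowledge)
    """
    key = situation["kind"]
    # Prefer a sequence already remembered for this kind of situation,
    # then fall back to trying every short sequence of known methods.
    candidates = [memory[key]] if key in memory else []
    candidates += [list(seq) for n in range(1, len(methods) + 1)
                   for seq in permutations(methods, n)]
    for seq in candidates:
        state = dict(situation)
        for name in seq:
            state = methods[name](state)
        if is_solved(state):
            memory[key] = list(seq)  # record what worked, for later reuse
            return seq, state
    return None, situation
```

For example, if neither an ‘increment’ method nor a ‘double’ method solves a situation alone, the program may discover that applying them in sequence does, and remember that sequence for similar situations.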

My point is that a simple initial feasibility test could be designed around this format, as a way for a computer program to learn direction from a human user so that it can go on to discover interesting ways to acquire structured knowledge, without that direction being reduced to inflexible programming instructions or tending toward a steady state of program-like specificity.  In other words, the program could acquire very specific kinds of knowledge and insights about how it can ‘act’ to learn more (or otherwise work with knowledge) without forgoing all potential for individual creativity.  Initial ‘reinforcement’ does not have to lead to canned responses.
 
Jim Bromer
                                          


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
