Two notes.
#1 - I was taught that a prediction and an explanation are the same entity: one is viewed prospectively whilst the other is viewed retrospectively.
#2 - Jean Piaget calls the process you discovered regulation: the reinforcement or correction of a behavior or prediction. Regulation is both success-driven and failure-driven. For a failed prediction or behavior, we synthesize a new element in our mind, ascribe causes or "reasons" for the failure, then incorporate the new element into our original behavior or prediction. Often we call the new element a "problem" or an "impediment"; whichever name you give it does not matter. Hammond, in his seminal book on case-based reasoning, talked about problem synthesis, ascription, and anticipation. Success is also a driver because it subtly reinforces our thinking, giving confidence to our behaviors and predictions and making us more likely to use them again in the future.
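If it helps to make the loop concrete, here is a minimal sketch in Python of how I picture one regulation step; the confidence numbers and the shape of the impediment record are my own placeholders, not Piaget's or Hammond's terms:

    def regulate(prediction, outcome, confidence, impediments):
        """One regulation step: reinforce on success; on failure,
        synthesize a new element and incorporate it."""
        if outcome == prediction:
            # Success-driven: subtle reinforcement, making us more likely
            # to reuse the prediction or behavior in the future.
            confidence = min(1.0, confidence + 0.1)
        else:
            # Failure-driven: synthesize a new element (a "problem" or an
            # "impediment") and ascribe a cause or "reason" for the failure.
            impediments.append({
                "expected": prediction,
                "observed": outcome,
                "ascribed_cause": None,  # filled in once a reason is found
            })
            confidence = max(0.0, confidence - 0.2)
        return confidence, impediments

    # A failed weather prediction lowers confidence and records an impediment.
    confidence, impediments = regulate("rain", "sun", 0.5, [])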
Okay. Now back to the problem at hand: making that alternative matrix for an internal representation given our requirements.
Who wants to go first?
~PM.
Date: Mon, 10 Dec 2012 00:14:00 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

Sorry for changing the subject a little. A few years ago, I became so tired of hearing people talk about using "prediction" as a basis for AGI that I decided to do an experiment with making personal predictions. At first my predictions were so bad that I quickly had to adapt them and make less precise predictions based on things that seemed more likely to me. So I was able to use predictions, but I found that I also used explanations to account for things that had happened, and I was also absolutely dependent on something that was neither predictive nor explanatory at all. For my experiment, I could privately write some predictions and some explanations about people I was writing to in one of these groups, but when I was forming the subject matter for the messages I sent them, I was not aware that I was using many predictions. If someone were to help me find the 'predictions' or 'explanations' I made in this message, for example, most of them would turn out to be almost incidental to the writing. There are some explanations that are central to this message, but most of the 'predictions' are incidental. They are not used for verification, but if some expectation were shattered, they would be used for adjusting my expectations.
The concept of prediction in AI was originally introduced as a means of verifying theories about the world. But because there are so many routes that misunderstanding can take, the dependency of verification on simple prediction is relatively weak: since it can only be done well by a highly intelligent being, it cannot be used as the substantial underlying basis for that intelligence.
However, I found something in my subjective experiment that I think was much more interesting. By formally writing a prediction (or an explanation) about someone, I became enlightened, or at least I found enlightenment soon after I wrote my theory down. Writing something down (privately) meant that the commitment to the prediction or explanation somehow forced me to subsequently refine it, to make it more likely and more useful. As I tested the theory, I found that I was able to develop better insight about people. Part of it was that I was keeping a record of my thoughts, and as I refined my theories about a person I was able to talk to them in a way that they found more interesting. But part of it was that, having committed to a theory and being able to test it (through conversation), my theories about the people I was talking to came to rest on better insights that I formed serendipitously.
I found that my predictions or explanations were very difficult to verify, but as I tried to verify them I was able to learn things about the person that were peripheral to the original prediction. Isn't that what happens in these groups? I haven't yet learned how to write an actual AGI program, but I have learned a great deal that is peripheral to that aim.
So by making a commitment to a theory (or opinion), writing it down, and following up by trying to test the theory in some way, I found that I was rarely able to use a novel 'prediction' as verification. I was, however, able to learn a great deal about the subject of the theory that was peripheral to the original prediction.

Jim Bromer
On Sun, Dec 9, 2012 at 12:20 PM, Piaget Modeler <[email protected]> wrote:

We'll prefer "Explanation" to "Justification" to avoid any undesired 
connotations.
Let's take the next step. We need to make an alternative matrix table with our requirements as the rows and the potential solutions as the columns. Initially each column should address one requirement, so the matrix would be NxN; but then we would combine the alternatives and collapse the columns somewhat. Who would like to begin this?
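To seed the discussion, here is a minimal sketch in Python of such a matrix, using three of our requirements as rows; the candidate representations and the 0-2 scores are placeholders of mine, not proposals:

    # Alternative matrix sketch: requirements as rows, candidate internal
    # representations as columns. The rows come from our requirements list;
    # the columns and scores are illustrative placeholders.
    requirements = [
        "Efficient organization",
        "Rapid execution to reevaluate multiple paths",
        "Supports vast interconnections among concepts",
    ]
    alternatives = ["semantic network", "frame system", "logic database"]

    # Score 0-2: how well each alternative addresses each requirement.
    matrix = {req: {alt: 0 for alt in alternatives} for req in requirements}
    matrix["Supports vast interconnections among concepts"]["semantic network"] = 2
    matrix["Rapid execution to reevaluate multiple paths"]["logic database"] = 1

    # Collapsing columns: merge two alternatives by keeping the better
    # score per requirement (one plausible way to combine them).
    def merge(matrix, a, b, merged_name):
        for scores in matrix.values():
            scores[merged_name] = max(scores.pop(a), scores.pop(b))

    merge(matrix, "semantic network", "frame system", "network+frames")
    for req, scores in matrix.items():
        print(req, "->", scores)

The merge step is just one convention for collapsing columns; we could equally keep combined alternatives alongside the originals.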

~PM.

Date: Sun, 9 Dec 2012 10:06:04 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

I don't want to quibble, but I find that the word 'justification', like the word 'prediction', usually sounds exaggerated when used in these discussions. Explanation is a way of expressing your reasons. You may explain your reasons for having done something while realizing that they might not be great reasons, and you may even realize that you might have been influenced by subconscious motivations. But yes, since reasons are sub-verbal and/or subconscious, we are not always able to find or express our reasons for doing something. And since we may not be aware of the effects of our own actions, we may not always be aware of what we are or were doing!

On Sat, Dec 8, 2012 at 11:20 PM, Piaget Modeler <[email protected]> wrote:

Is that because the justifications are sub-verbal / subconscious? 

Date: Sat, 8 Dec 2012 19:58:11 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

Humans cannot always give good reasons for their decisions, but in many cases they can. So if a person makes a decision based on a reason, or if they can rediscover the reasons underlying a habituated response, then they should be able to describe something about those reasons.

On Sat, Dec 8, 2012 at 4:40 PM, Piaget Modeler <[email protected]> wrote:

Once we have a good set of requirements we can begin a design by finding design elements that match the requirements, assumptions, dependencies, and constraints.

Requirements:
1. Efficient Organization
2. Efficient Execution
3. Easy to Understand
4. Supports vast interconnections among concepts
5. Rapid Execution to continually reevaluate multiple paths
6. Need to switch search spaces rapidly
7. Need to expand or restrict search spaces dynamically
8. Need for concept integration
9. Need for concept differentiation
10. Need to fluidly recombine concepts
11. Need to support simulation and multiple path exploration.
12. Supports explanation discovery / Reason-based reasoning and planning.
13. Should be able to explain reasons for decisions.

Assumptions

Dependencies

Constraints

Anything else to add? Are any of these assumptions or constraints rather than requirements?

Kindly advise.
~PM.