"The
central idea is that knowledge proceeds neither solely from the experience of 
objects nor from an innate programming performed in the subject, but from
successive constructions, the result of constant development of new
structures.”   ~ Jean Piaget
So I think we knit together these insights, piecemeal, until they recur and strengthen, and become more predictable and forceful in our minds.  Then they integrate and form a larger structure, and eventually they become a subsystem, integrating with other subsystems, until they finally integrate with the totality.
Or at least that's how I interpreted it in "The Development of Thought" by J. Piaget.
Cheers.
~PM.
Date: Tue, 4 Dec 2012 23:12:06 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

Well, I would look at Ryszard Michalski's work on dynamically interlaced 
hierarchies if it was convenient for me to do so. Nothing about this is 
mentioned on his home page and the first reference I looked at did not seem 
like a breakthrough paper.
 I want to finish something that I was thinking about.  We (or a machine) would be able to build strong knowledge if the knowledge that was gained could be used to reliably predict, explain, or produce a specific outcome.  But often, the outcomes are weak or unreliable indicators of anything of much value.  So instead we are left with a lot of weakly related situation-action-reaction insights that are inexplicably conditional and variant.
 This is a lot like serendipitous learning.  If I try to learn something, I 
probably won't be able to figure out what I wanted to figure out (unless it is 
something that other people had already figured out and it was within my field 
of knowledge).  But I would probably learn something new serendipitously.  Now 
can we patch a lot of weak unexpected insights together?  Yes, but in order to 
build something reliable out of a lot of weak structural pieces they have to be 
integrated pretty thoroughly.  The integration does not have to be perfect, but the matrix of these things has to be strong enough to serve as a foundation for greater insights.
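
 To make that a little more concrete, here is a rough, throwaway sketch in Python (purely my own illustration, not any established mechanism) of what patching weak insights together might look like: each insight is a situation-to-outcome association whose reliability is tracked from observation, and a prediction is just a reliability-weighted vote over the insights that match the current situation.

from collections import defaultdict

class WeakInsight:
    """A situation -> outcome association with an observed hit rate."""
    def __init__(self, situation, outcome):
        self.situation = situation
        self.outcome = outcome
        self.hits = 0
        self.trials = 0

    def observe(self, actual_outcome):
        # Update the insight's track record against what actually happened.
        self.trials += 1
        if actual_outcome == self.outcome:
            self.hits += 1

    @property
    def reliability(self):
        return self.hits / self.trials if self.trials else 0.0

def combined_prediction(insights, situation):
    # Reliability-weighted vote over all insights matching the situation;
    # individually weak insights can add up to a more dependable guess.
    votes = defaultdict(float)
    for ins in insights:
        if ins.situation == situation:
            votes[ins.outcome] += ins.reliability
    return max(votes, key=votes.get) if votes else None

 Of course the hard part is the thorough integration step itself, which this kind of flat weighting does not capture.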
 Jim Bromer

On Tue, Dec 4, 2012 at 9:31 PM, Piaget Modeler <[email protected]> 
wrote:

I would agree that you also need multi-strategy reasoning in addition to correlations.
Look at Ryszard Michalski's work on dynamically interlaced hierarchies. He has a fast and efficient mechanism for inference.  He inspired me.

Cheers,
~PM.


Date: Tue, 4 Dec 2012 18:36:20 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

I discovered something about logic that I never knew before.  It is something 
that I have thought about for 40 years, but I never stopped to explore the 
application.  Now, shouldn't this new insight give me greater understanding?  
Well, yeah, but it doesn't work that way.  I have a new insight but I haven't 
got any use for it.  So now I have to try to find some practical use for it.  
Well, even though I don't have any use for it, I might pick up some street cred by telling other people about it, right?  Well, no, not really.  It is really a turn-the-crank kind of thing, and the fact that I thought about it for so long
without ever once examining its application is kind of embarrassing.  So now, 
before I can talk about it I have to search for some way to use the idea 
effectively.  If I found some utility for it then I could pick up some credit 
for it, but until then it is just going to make my work with logic more 
complicated.

 The insight was a turn-the-crank kind of insight, so it represented the application of a familiar idea to another familiar idea in a way that was very familiar to me.  The only thing I did differently was to actually see how it worked in a few examples.  When I did that, I realized that the effects were not exactly what I expected.  However, logic is an artificial field which is well formed, so other logic-based ideas, like something from mathematics, can sometimes be easily integrated into it.  In real-world examples of ideative projection, the analysis of turn-the-crank imagination cannot easily be achieved just by using other (integrated or related) methods of internal ideative projection.  And as I just explained, simple correlation methods are not an easy substitute for insightful methods.

 Jim Bromer  

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com