One would definitely want to include encapsulation and abstraction as
requirements of an AGI system. Otherwise it would be overwhelmed during search.
~PM.

Date: Thu, 6 Dec 2012 10:06:30 -0600
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

I've pointed to the Encapsulation concept from the object-oriented paradigm
before as a means of limiting complexity. Recursively packing up highly
interrelated subsystems, and extracting a summary of each system's important
or relevant expected behavior expressed independently of the details of the
underlying processes that produce that behavior, allows the search space to
be fragmented automatically based on degree of connectivity. The expected
behavior of a system can be summarized by running a simulation-based search
over the interactions of only its subsystems, given the events that might
affect the system. It is a means of automatically generating effective
heuristics that works independently of the knowledge domain. A summary is
less complex by definition, so working with summaries reduces the
combinatorial explosion to something more manageable, at the price of
sometimes missing seemingly insignificant details.
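A minimal sketch of what this might look like in code (all names here are
hypothetical illustrations of the idea, not anyone's actual system):

    # Recursive encapsulation with cached behavioral summaries.
    # observe_primitive() and simulate() are stand-ins: a real system
    # would run a simulation-based search over subsystem interactions.

    def observe_primitive(name):
        # Stand-in for the measured behavior of a leaf component.
        return "behavior(%s)" % name

    def simulate(child_summaries):
        # Stand-in for simulating interactions among direct children,
        # using only their summaries, never their internals.
        return "interaction(" + ", ".join(child_summaries) + ")"

    class Subsystem:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []
            self._summary = None  # expected behavior, detached from internals

        def summary(self):
            # Compute the summary once, from direct children only.
            if self._summary is None:
                if self.children:
                    self._summary = simulate(
                        [c.summary() for c in self.children])
                else:
                    self._summary = observe_primitive(self.name)
            return self._summary

    engine = Subsystem("engine", [Subsystem("piston"), Subsystem("valve")])
    car = Subsystem("car", [engine, Subsystem("wheels")])
    print(car.summary())

Search at the car level then manipulates two child summaries rather than
every internal part, which is exactly the fragmentation by connectivity
described above.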

This also explains why we think in terms of objects in the first place. There
is no reason to think that, just because we draw a line around a set of
phenomena and call it an object or a kind of object, the instance or type we
have isolated is necessarily an inherently distinct thing in reality. The
bounds we draw around those phenomena (encapsulating them) are in our minds,
not in reality, but they are difficult to recognize as such because they are
the very things we use for recognition. Why would we add this new information
to a system which is already overwhelmed by information? To organize that
information flood into something manageable.



On Thu, Dec 6, 2012 at 6:08 AM, Jim Bromer <[email protected]> wrote:

Piaget Modeler <[email protected]> wrote:

Integration is combining different concepts via their attributes, akin to
crossover or memetic recombination.
...I think complexity can be an issue if you have one huge problem and one
huge model. Search spaces become ridiculous very fast. However, if you
engineer a system so that you are not dealing with one large problem but lots
of small problems, I think it's feasible to engineer solutions to manage the
complexity of each problem reasonably.
-------------------------------
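As a rough back-of-envelope illustration of the saving being claimed here
(toy numbers, not a model of anyone's system):

    # Thirty binary choices searched as one problem versus as ten
    # independent three-choice subproblems.
    one_big_space = 2 ** 30        # 1,073,741,824 candidate solutions
    many_small = 10 * 2 ** 3       # 80 candidates examined in total
    print(one_big_space, many_small)

The catch, argued at length below, is that the subproblems are rarely
independent; the factoring itself is where the difficulty hides.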
 
The Wikipedia definition of memetics was interesting. Assuming that I can
make a pretty good guess about how your idea of memetic recombination might
work, I would say that your imagined usage of the method has some serious
problems. First, a meme cannot be modelled in the same way a superficial data
string can be, so the comparison of memetic algorithms to recombination in
genetic algorithms seems fanciful. Secondly, the idea that the attributes of
a concept might be clearly differentiated in an automated system that is able
to learn, and then used to clearly integrate different ideas, seems unlikely.
I do not think the concept is impossible; I think that it is complicated. It
is a problem of complexity. You mentioned that you thought you could avoid
complexity by using many small search problems. Although I cannot point to
this or that study to drive the point home, I do feel that there is ample
evidence that domain-restricted learning has not worked in AI precisely
because we need to use concepts outside of the domain in order to understand
those concepts which are strongly within the domain. (By the way, here is
where an imagined efficiency of using weighted evaluations can really turn to
nonsense. You can't eliminate the need to look outside the domain to
determine meaning or relevance just by putting a numerical value on how much
a meme belongs to a particular domain.)

So, to simplify this (so it does not sound like the product of a chat bot to
you): a fully automated system cannot be expected to reliably determine the
attributes of a concept (or meme) in order to use them to flawlessly
integrate concepts. So, assuming that your program would be able to achieve
some viable integration, one has to assume that the integration would be very
rough. Your program might be able to make a crude, haphazard matrix of
relations between the concepts, but how would it continue to refine this
process without encountering some serious complexity?

Here is where the hollow functional identity argument (as I called it) turns
to delusion. 'If the program could do some crude thinking then it would be
able to use the same processes to do more sophisticated thinking.' This is a
philosophy that many AI programmers live by. Is it reality? We all know from
personal experience and observation that there is a limit to intellectual
ability, and those limits have something to do with complexity. To believe
that you can avoid complexity just by keeping the search space to a minimum
doesn't sound insightful. Does that method work for you personally? Of course
not. It is obvious that you have been trying to find methods for your program
by broadening the search space of your studies. This is just the opposite of
what you formally declared was a solution to complexity. It is an obvious
contradiction. When you can't figure something out, you sometimes expand the
search area and you sometimes limit the search area. These methods don't
always work, but they are important steps in learning and trying to find
solutions.

 Jim Bromer  
 On Wed, Dec 5, 2012 at 12:18 PM, Piaget Modeler <[email protected]> 
wrote:

Jim, your prior e-mail reads like you are either a chatbot or are attempting
NLP (Neuro-Linguistic Programming) or DHE.
Just ask; my answer may be yes or no. My own reason for assisting is that I'd
like you to understand my approach.


Differentiation IS conditional branching. Observation is receiving sensory
stimuli. Coordination means making inferences. Integration is combining
different concepts via their attributes, akin to crossover or memetic
recombination.
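Taken at face value, that last mapping suggests something like uniform
crossover over attribute sets. A hypothetical sketch of the idea (invented
names, not PM's actual program):

    import random

    def integrate(concept_a, concept_b, rng=random.Random(0)):
        # Combine two concepts by inheriting each attribute from one
        # parent at random, the way uniform crossover recombines genes.
        child = {}
        for attr in sorted(set(concept_a) | set(concept_b)):
            donors = [c for c in (concept_a, concept_b) if attr in c]
            child[attr] = rng.choice(donors)[attr]
        return child

    bird = {"moves_by": "flying", "covering": "feathers", "lays": "eggs"}
    fish = {"moves_by": "swimming", "covering": "scales", "lays": "eggs"}
    print(integrate(bird, fish))  # a chimera drawing attributes from both

The step this sketch glosses over, extracting clean, comparable attributes
from learned concepts in the first place, is exactly where the complexity
objection bites.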


Please define verification.  It may be what I call correlation.
Cheers!
Imagine, NLP via e-mail. Whooda thunk it?

Date: Wed, 5 Dec 2012 07:54:16 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]



I agree with Piaget's remark. I am going to conduct an experiment. I want to
see if I can get you to solve a problem for me. So I am going to keep track
of our conversation by keeping notes on particular issues related to this
experiment. It is unlikely that you would be able to solve a particular
problem that is of interest to me, so I am going to be looking for an
unexpected solution to some related problem that I will pick up somewhat
serendipitously from our conversation. The best way to get you to cooperate
with me on this is to get you to talk about the things you are interested in.
However, the solutions to the problems of your projects probably will not be
the solutions to the problems of my projects, so I have to find a way to get
you to talk about something that is common to both of our projects.



So I have gotten you to describe some ways that your program can apply
imagination to problem solving. You seem to acknowledge that integration is a
part of the process, but you haven't acknowledged that complexity is a
problem. So now, in order to get you to continue discussing this, I have to
back off from talking about complexity and emphasize the problems of
'verifying' and integrating internal projections. I will review your message
in response to my question of how your program will use imagination, and I
will copy that response into my notes. Now that I have reviewed some of your
previous messages, I see that you mentioned Piaget's comments on coordination
before. Coordination seems to be very similar to conceptual integration. I
also found that you had told me that Michalski had a fast inferencing method,
so that must be important to you for some reason.


So, to repeat myself for clarity: I am going to run a subjective experiment
for a couple of weeks. The goal is to get you to solve a problem for me, and
I want to be able to note how I personally integrate subject-related
serendipity into my knowledge structures concerning the subject. It is
unlikely that you would be able to solve a problem that I specified in
advance, so I am going to look for an unexpected, serendipitous solution to
some problem that I haven't yet completely identified. In order to get you to
participate in this experiment, I need to encourage you to talk about your
project using terms that are relevant to both of us. Since I will be keeping
notes, I have started by reviewing and collecting some of the comments you
made in this thread. I can then use this knowledge to get you to continue
talking about things that interest you. I noted that you have not
acknowledged that complexity is a problem, so I will back off that particular
problem and try to shift to integration (coordination) issues that seem
challenging for an automated AGI program to use effectively. Now that I have
explained this 'experiment' to you, I will stop talking about it and get back
to the subject.


In your list of mental coordination methods, internal simulation methods, and
inferencing, you did not specifically mention conditional branching, so there
is a chance that you (or Piaget) left it off the list. I would say that is a
pretty important concept! On the other hand, running different methods to use
in a comparison with perceived events seems to imply conditional branching.
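To make that implication concrete, here is a hedged sketch (invented names,
not a description of either program) of how comparing simulated predictions
against a perceived event is itself a branch on the outcome:

    def simulate_and_branch(methods, perceived_event, tolerance=0.5):
        # Pick the internal method whose prediction best matches
        # what was actually perceived.
        best_method, best_error = None, float("inf")
        for method in methods:
            error = abs(method() - perceived_event)
            if error < best_error:       # the implicit conditional branch
                best_method, best_error = method, error
        if best_error > tolerance:       # another branch: nothing fit
            return None
        return best_method

    # Two toy internal simulations predicting where a ball lands.
    methods = [lambda: 4.8, lambda: 9.5]
    chosen = simulate_and_branch(methods, perceived_event=5.0)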


Anyway, the next question I have for you concerns 'verification' and
integration (coordination). Without strong verification, coordination is
essentially going to tie weak inferences together. If you accept that this
could be a problem, then how would your program use the products of
coordination reliably?
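One way to see the force of this question (toy numbers, chosen only for
illustration): if each unverified inference carries, say, 80% confidence,
then coordination that merely chains such inferences decays multiplicatively.

    # Chaining weak inferences without verification: 0.8 per step.
    step_confidence = 0.8
    for n in (1, 3, 5, 10):
        print(n, round(step_confidence ** n, 3))
    # -> 1 0.8, 3 0.512, 5 0.328, 10 0.107

Ten weak steps leave barely one chance in ten that the conclusion holds,
which is why verification at each joint matters.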


 Jim Bromer
 On Tue, Dec 4, 2012 at 11:45 PM, Piaget Modeler <[email protected]> 
wrote:

"The
central idea is that knowledge proceeds neither solely from the experience of 
objects nor from 
an innate programming performed in the subject, but from
successive constructions, the result of 
constant development of new
structures.”   ~ Jean Piaget


So I think we knit together these insights, piecemeal, until they recur and
strengthen, and become more predictable and forceful in our minds. Then they
integrate and form a larger structure, and eventually they become a
subsystem, integrating with other subsystems, until they finally integrate
with the totality.

Or at least that's how I interpreted it in "The Development of Thought" by
J. Piaget.

Cheers.


~PM.


Date: Tue, 4 Dec 2012 23:12:06 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

Well, I would look at Ryszard Michalski's work on dynamically interlaced
hierarchies if it were convenient for me to do so. Nothing about this is
mentioned on his home page, and the first reference I looked at did not seem
like a breakthrough paper.



I want to finish something that I was thinking about. We (or a machine) would
be able to build strong knowledge if the knowledge that was gained could be
used to reliably predict, explain, or produce a specific outcome. But often
the outcomes are weak or unreliable indicators of much of value. So instead
we are left with a lot of weakly related situation-action-reaction insights
that are inexplicably conditional and variant.



This is a lot like serendipitous learning. If I try to learn something, I
probably won't be able to figure out what I wanted to figure out (unless it
is something that other people had already figured out and it was within my
field of knowledge). But I would probably learn something new
serendipitously. Now, can we patch a lot of weak, unexpected insights
together? Yes, but in order to build something reliable out of a lot of weak
structural pieces, they have to be integrated pretty thoroughly. The
integration does not have to be perfect, but the matrix of these things has
to be strong enough to serve as a foundation for greater insights.



 Jim Bromer

On Tue, Dec 4, 2012 at 9:31 PM, Piaget Modeler <[email protected]> 
wrote:

I would agree that you also need multi-strategy reasoning in addition to
correlations.
Look at Ryszard Michalski's work on dynamically interlaced hierarchies. He
has a fast and efficient mechanism for inference. He inspired me.

Cheers,
~PM.


Date: Tue, 4 Dec 2012 18:36:20 -0500
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

I discovered something about logic that I never knew before. It is something
that I have thought about for 40 years, but I never stopped to explore the
application. Now, shouldn't this new insight give me greater understanding?
Well, yeah, but it doesn't work that way. I have a new insight, but I haven't
got any use for it. So now I have to try to find some practical use for it.
Well, even though I don't have any use for it, I might pick up some street
cred by telling other people about it, right? Well, no, not really. It is
really a turn-the-crank kind of thing, and the fact that I thought about it
for so long without ever once examining its application is kind of
embarrassing. So now, before I can talk about it, I have to search for some
way to use the idea effectively. If I found some utility for it then I could
pick up some credit for it, but until then it is just going to make my work
with logic more complicated.

The insight was a turn-the-crank kind of insight, so it represented the
application of a familiar idea onto another familiar idea in a way that was
very familiar to me. The only thing I did differently was to actually see how
it worked in a few examples. When I did that, I realized that the effects
were not exactly what I expected. However, logic is an artificial field which
is well formed, so that other logic-based ideas, like something from
mathematics, can sometimes be easily integrated into it. In real-world
examples of ideative projection, the analysis of turn-the-crank imagination
cannot easily be achieved just by using other (integrated or related) methods
of internal ideative projection. And as I just explained, simple correlation
methods are not an easy substitute for insightful methods.

Jim Bromer