I agree that goal-directed behavior is at the top of the pyramid of 
functionality, with perception (observation) and categorization (coordination) 
at the base.
I disagree, though, because I do believe babies plan. I also believe babies 
featurize their environment, and I think they use those features to plan. One 
basic plan that babies employ to achieve their goals is called "cry". Babies 
use that plan in numerous situations, and possibly when all other plans fail. 
My question was different. It was: given the large number of features available 
at any given point, how do we choose the most "relevant" ones in order to 
formulate the current state (or, in the case of a planner, an initial state)? 
A few approaches exist: (1) pick a few features at random; (2) sort the 
features in some way and pick the first N features; (3) pick no features, and 
instead plan using an empty initial state and a goal state; (4) see whether any 
correlation exists (temporal concurrence, temporal sequence, etc.) between 
features and the goal, and if so, pick the initial-state features based on that 
correlation. Doubtless there are other possible answers as well. 
Perhaps experimentation with all these approaches should be done to determine 
which works best. 
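For what it's worth, approach (4) can be sketched in a few lines of numpy. The 
data here is entirely hypothetical (one feature secretly drives the goal 
signal); each feature's history is scored by its absolute Pearson correlation 
with the goal signal, and the top-scoring features become the initial state:

```python
import numpy as np

# Hypothetical data: rows are observed time steps, columns are features;
# goal_signal marks the time steps at which the goal was satisfied.
# Feature 3 is secretly the one that drives the goal.
rng = np.random.default_rng(0)
n_steps, n_features = 200, 50
features = rng.normal(size=(n_steps, n_features))
goal_signal = (features[:, 3] + 0.1 * rng.normal(size=n_steps) > 0).astype(float)

def select_relevant(features, goal_signal, k):
    """Rank features by absolute Pearson correlation with the goal
    signal (approach 4) and keep the top k as the initial state."""
    centered = features - features.mean(axis=0)
    goal_centered = goal_signal - goal_signal.mean()
    corr = (centered * goal_centered[:, None]).mean(axis=0) / (
        features.std(axis=0) * goal_signal.std() + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

relevant = select_relevant(features, goal_signal, k=5)
```

On this toy data the planted feature comes out on top; in practice the hard 
part is getting a goal signal and feature histories worth correlating.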
~PM
Date: Sun, 29 Dec 2013 22:54:58 -0600
Subject: Re: [agi] Problem [Space] formulation
From: [email protected]
To: [email protected]

First comes awareness, then comes play, then comes planning. Babies don't plan 
anything. They just observe and correlate. They pick out salient features based 
on what allows them to predict the most about future observations, particularly 
those relating to reward or suffering. (Human faces and voices, among other 
things, are probably recognized a priori as salient through hardwired 
mechanisms.) Play allows the non-goal-directed testing of observed correlations 
between salient features. Planning can only take place after you have started 
to recognize that salient feature X leads to salient feature Y, which is 
associated with reward or with relief from or avoidance of suffering. Build the 
foundation first, in the form of a salient-feature recognition engine -- an 
autoencoder* or some other autocorrelation technique -- and then implement 
your planning engine on top of that. Goal-directed behavior is at the top of a 
pyramid of functionality, with perception and categorization at the base.

*Not sure if you're familiar with autoencoders. They are a technique used in 
deep learning, among other things. The idea is to train a pair of neural 
networks (or other learning algorithms) to produce a compressed, (nearly) 
lossless encoding -- which by necessity pulls out the most salient features of 
the input -- and then to use the compressed representation as the input to the 
next layer, or in your case, as the input to the planning engine. (The decoder 
is typically thrown away once the encoder/decoder pair has been trained.) It's 
similar in function to principal component analysis, which could also possibly 
serve your purposes here.
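As a toy illustration of the footnote -- every number below is invented, and 
this is the minimal linear version of the idea, not a deep-learning 
implementation -- 20-dimensional inputs that secretly depend on only 5 
underlying factors are squeezed through a 5-unit bottleneck, and plain 
gradient descent on reconstruction error forces the bottleneck to capture 
those factors:

```python
import numpy as np

# Toy data: 20-dimensional inputs built from 5 hidden factors, so a
# 5-unit bottleneck can encode them (nearly) losslessly.
rng = np.random.default_rng(1)
factors = rng.normal(size=(500, 5))
X = np.concatenate(
    [factors, factors @ rng.normal(size=(5, 15)) / np.sqrt(5)], axis=1)

W_enc = rng.normal(scale=0.1, size=(20, 5))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(5, 20))   # decoder weights (thrown away later)
lr = 0.1

def loss(W_enc, W_dec):
    """Mean squared reconstruction error."""
    return ((X @ W_enc @ W_dec - X) ** 2).mean()

initial_loss = loss(W_enc, W_dec)
for _ in range(2000):
    H = X @ W_enc                 # compressed code: the "salient features"
    err = H @ W_dec - X           # reconstruction error
    grad_dec = 2 * H.T @ err / X.size
    grad_enc = 2 * X.T @ (err @ W_dec.T) / X.size
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final_loss = loss(W_enc, W_dec)
```

After training, `X @ W_enc` is the compressed representation that would feed 
the planning engine; with a linear encoder like this, it spans essentially the 
same subspace principal component analysis would find, which is the PCA 
comparison made above.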


On Sat, Dec 28, 2013 at 11:14 AM, Piaget Modeler <[email protected]> 
wrote:

Thanks Jim,
Much to think about...
~PM

Date: Sat, 28 Dec 2013 08:33:22 -0500
Subject: Re: [agi] Problem [Space] formulation
From: [email protected]
To: [email protected]

You could, for example, try to write a planner for solving the problem of 
selecting the relevant factors to form an initial state (and all the other 
possible states) for your lower solver to work with. To do this you would 
probably have to use some kind of abstractions. Since you want it to be based 
on solid scientific methods, you might try something like material objects, 
properties, and relations between objects. You might find this does not work 
so well, since you want an AGI program to deal with imaginary things like 
fantasies and creative solutions to problems. So then you might start by 
working with something like ideas, concepts, and conceptual relations. 
However, this does not solve the representation problem, so then you have to 
include something about representations. But now you are back to the earlier 
problem: you cannot 'represent' everything that you can think about, so you 
might try to find a slightly better word-concept to use, like 'symbols' or 
'references'. Even though you cannot 'represent' everything there is about the 
universe, for example, you can still refer to it.

Since you will need a way to represent the symbolic references, you might try 
to rely on grammatical primitives like 'nouns' and 'verbs'. This, however, 
implies that every concept-situation you might want your solver to represent 
could be represented with individual words. That does not quite make sense, so 
you might then use actions and objects as part of your native abstractions for 
the solver to use in solving the problem of building a state system (or other 
kind of system) for a higher-level (or more primitive) solver to solve. And 
you might think about using something like computational syntax instead of 
linguistic syntax, since your solver is going to be a computational program 
and it will act as if it were operating on computational strings anyway.

These ideas, however, still do not solve the problem, because they are much 
too abstract and too full of gaps. And while the abstractions may seem ideal 
for representing any possible situation, it turns out they are not, because 
they are themselves possible states of thought. So, for another example, can 
you have a representation of something that cannot be represented? Well, yes, 
since you can represent a reference to it. But that means your solver has to 
be on guard against getting caught in illogical representations that refer to 
things that cannot be referred to in the way your abstract program is going to 
try to refer to them. While the comprehension and representation of the entire 
universe (for example) may not represent much of a problem to anyone other 
than a fool, it does show how problems of symbolic representations and 
references can creep into your solver when you start with higher abstractions.

So now you need to add an incremental trial-and-error method to detect such 
possibilities. And since such things may be hidden by more complicated systems 
of references, the problems cannot be easily detected. Incremental methods 
alone may not be strong enough to gain traction for your solver to create 
possible plan states (or other plan domains), so your solver will have to 
latch onto something that builds traction into its incremental trial-and-error 
methods as it progresses in its goal of creating a domain that is suitable for 
a planner.


Oops. I am talking too much about my own ideas again. Sorry.  However, it all 
seems so relevant to the situation that you just described, and so relevant to 
the question of the problem of creating an actual AGI program that I can excuse 
myself.



On Fri, Dec 27, 2013 at 11:34 PM, Piaget Modeler <[email protected]> 
wrote:

So I'm writing my solvers, and I hit the first roadblock...
Given (1) a sea of sensory stimuli, (2) a prioritized set of goals, and (3) a 
possibly empty set of plans, how does one select the relevant stimuli to form 
an initial state for a problem? 


Another way to ask the question is how do humans select relevant features from 
their current environment to be able to formulate or retrieve plans that 
address their goals? 


And how many of these environmental features are enough to describe an initial 
state? 
There is PDDL (the Planning Domain Definition Language). But to use PDDL, one 
first has to solve the problem of describing the relevant features of the 
environment. 
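As a hypothetical sketch of that last step: once the relevant features have 
been selected (by whatever method), serializing them into a PDDL problem is 
the easy part. The domain name and predicates below are invented for 
illustration only:

```python
def make_pddl_problem(name, domain, init_facts, goal_facts):
    """Render a minimal PDDL problem whose :init section is the
    set of selected environmental features."""
    init = "\n    ".join(f"({f})" for f in init_facts)
    goal = " ".join(f"({f})" for f in goal_facts)
    return (
        f"(define (problem {name})\n"
        f"  (:domain {domain})\n"
        f"  (:init\n    {init})\n"
        f"  (:goal (and {goal})))\n"
    )

# Invented example: two selected stimuli become the initial state.
problem = make_pddl_problem(
    "feed-baby", "nursery",
    init_facts=["hungry baby", "at caregiver kitchen"],
    goal_facts=["not-hungry baby"],
)
```

The hard question, of course, is what fills `init_facts` -- which is exactly 
the feature-selection problem posed above.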


How do we come up with these relevant features, to be able to formulate an 
initial state? 
Thoughts?  
~PM
(I forgot what Newell & Simon said about the PSCM & Unified Theories of 
Cognition.
Time for more research...)




-- 
Jim Bromer








-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
