Our assumptions about AGI define us. 
This is good. It means that there will be a variety of approaches, pursued in a 
systematic and hopefully scientific manner, and perhaps some approaches will 
prove viable whereas others will prove ineffective. That's what inquiry is all 
about.
If we all agreed, we'd probably get nowhere faster, because of our blind spots. 
~PM
Date: Mon, 30 Dec 2013 11:53:23 -0500
Subject: Re: [agi] Problem [Space] formulation
From: [email protected]
To: [email protected]

I really do not see how the assumption that an AGI program can detect features 
other than the primitive features of a sensor can be taken as a grand 
stepping-off point for the higher-level analysis and reasoning that you want to 
design into a program.  John Rose talked about the correlative structures of 
the world, and this might be a good basis for further analysis of how an 
intermediate stepping-off point would work.  But the idea that something like 
autoencoders, PCA, or perception and categorization is sufficient to explain 
how features may be detected from the sensor environment is something that, if 
the theory were true, should be feasible to prove right now - just as my claim 
that an AI program could learn to use CF grammars through trial-and-error 
learning is something that should be easy to demonstrate. 

If the significant features needed for an intelligent comprehension of the 
events of a data environment were easy to detect, then AGI would be easy.  The 
whole problem has been that this initial stage, the detection of effective 
features, has not been easy.

Jim Bromer

On Mon, Dec 30, 2013 at 3:13 AM, Piaget Modeler <[email protected]> 
wrote:




I agree that goal-directed behavior is at the top of the pyramid of 
functionality, with perception (observation) and categorization (coordination) 
at the base.
But I disagree that babies don't plan: I do believe babies plan.  I also 
believe babies featurize their environment, and that they use those features to 
plan.  One basic plan that babies employ to achieve their goals is called 
"Cry".  Babies use that plan in numerous situations, and possibly when all 
other plans fail. 
My question was different.  It was: given a grand number of features available 
at any given point, how do we choose the most "relevant" ones in order to 
formulate the current state - or, in the case of a planner, an initial state? 
A few approaches exist:  (1) pick a few features at random; (2) sort the 
features in some way and pick the first N features; (3) pick no features, and 
instead plan using an empty initial state and a goal state; (4) see if any 
correlation exists (temporal concurrence, temporal sequence, etc.) between 
features and the goal, and if so, pick the initial-state features based on that 
correlation.  Doubtless there are other possible answers as well. 
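Approach (4) is concrete enough to sketch in code. The following is a minimal illustration only, not anyone's proposed implementation; the function name, the toy data, and the use of Pearson correlation as the "relevance" score are all assumptions made for the example:

```python
import numpy as np

def select_initial_state_features(history, goal_signal, top_n=3):
    """Approach (4): rank features by the strength of their correlation
    with the goal signal over time, then keep the top N.

    history: (timesteps, n_features) array of observed feature values.
    goal_signal: (timesteps,) array, e.g. 1.0 when the goal was satisfied.
    """
    scores = []
    for j in range(history.shape[1]):
        col = history[:, j]
        # Pearson correlation; guard against constant (zero-variance) features.
        if col.std() == 0 or goal_signal.std() == 0:
            scores.append(0.0)
        else:
            scores.append(abs(np.corrcoef(col, goal_signal)[0, 1]))
    # Indices of the N most goal-correlated features.
    return sorted(np.argsort(scores)[::-1][:top_n].tolist())

# Toy data: feature 0 tracks the goal, features 1-2 are pure noise.
rng = np.random.default_rng(0)
goal = (rng.random(200) > 0.5).astype(float)
hist = np.column_stack([goal + 0.1 * rng.standard_normal(200),
                        rng.standard_normal(200),
                        rng.standard_normal(200)])
print(select_initial_state_features(hist, goal, top_n=1))  # feature 0 wins
```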

Perhaps experimentation with all these approaches should be done to determine 
which works best. 

~PM
Date: Sun, 29 Dec 2013 22:54:58 -0600
Subject: Re: [agi] Problem [Space] formulation

From: [email protected]
To: [email protected]

First comes awareness, then comes play, then comes planning. Babies don't plan 
anything. They just observe and correlate. They pick out salient features based 
on what allows them to predict the most about future observations, particularly 
those relating to reward or suffering. (Human faces and voices, among other 
things, are probably recognized a priori as salient through hardwired 
mechanisms.) Play allows the non-goal directed testing of observed correlations 
between salient features. Planning can only take place after you have started 
to recognize that salient feature X leads to salient feature Y, which is 
associated with reward or relief from/avoidance of suffering. Build the 
foundation first, in the form of a salient feature recognition engine -- an 
autoencoder* or some type of autocorrelation technique -- and then implement 
your planning engine on top of that. Goal-directed behavior is at the top of a 
pyramid of functionality with perception and categorization at the base.
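The idea of picking out salient features "based on what allows them to predict the most about future observations" can be given a crude runnable form. Everything below (the function name, the lag-1 squared-correlation score, the toy signals) is an invented illustration under that assumption, not a claim about how infants actually do it:

```python
import numpy as np

def predictive_salience(obs, horizon=1):
    """Rank features by how well their current value predicts the next
    observation vector: squared correlation with each future feature,
    summed.  A crude stand-in for 'predicts the most about the future'.

    obs: (timesteps, n_features) array of observations.
    """
    now, future = obs[:-horizon], obs[horizon:]
    n = obs.shape[1]
    scores = np.zeros(n)
    for j in range(n):
        for k in range(n):
            a, b = now[:, j], future[:, k]
            if a.std() > 0 and b.std() > 0:
                scores[j] += np.corrcoef(a, b)[0, 1] ** 2
    return np.argsort(scores)[::-1]  # most predictive feature first

# Feature 0 is a slowly drifting signal (predicts its own future well);
# feature 1 is white noise (predicts nothing).
rng = np.random.default_rng(2)
drift = np.cumsum(rng.standard_normal(300))
obs = np.column_stack([drift, rng.standard_normal(300)])
print(predictive_salience(obs)[0])  # the drifting feature ranks first
```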


*Not sure if you're familiar with autoencoders. They are a technique used for 
deep learning, among other things. The idea is to teach a pair of neural 
networks or other classification-learning algorithms to learn a compressed, 
(nearly) lossless encoding, which by necessity pulls out the most salient 
features of the input, and then to use the compressed representation as the 
input to the next layer, or in your case, the input to the planning engine. 
(The decoder is typically thrown away once the encoder/decoder pair has been 
trained.) It's similar in function to principal component analysis, which could 
also possibly serve your purposes here.
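Since a neural autoencoder needs training machinery, the simplest runnable sketch of the same idea is its linear special case, PCA, which the note above already suggests as a possible substitute. The function name and toy data are assumptions made for illustration:

```python
import numpy as np

def fit_encoder(X, n_components=2):
    """PCA as a linear 'autoencoder': projecting onto the top principal
    directions gives a compressed encoding; projecting back reconstructs
    the input with minimal squared error.  (A neural autoencoder
    generalizes this to nonlinear encodings.)"""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_components]                 # encoder weights
    encode = lambda x: (x - mean) @ W.T   # compressed "salient" code
    decode = lambda z: z @ W + mean       # decoder (often thrown away)
    return encode, decode

# Toy data: 50 samples of 5 correlated sensor channels that really vary
# along only 2 underlying directions, plus a little noise.
rng = np.random.default_rng(1)
latent = rng.standard_normal((50, 2))
X = latent @ rng.standard_normal((2, 5)) + 0.01 * rng.standard_normal((50, 5))

encode, decode = fit_encoder(X, n_components=2)
codes = encode(X)                        # 2 features per sample, not 5
err = np.abs(decode(codes) - X).max()    # near-lossless reconstruction
print(codes.shape, err < 0.1)
```

The `codes` array is what would feed the next layer, or here, the planning engine.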



On Sat, Dec 28, 2013 at 11:14 AM, Piaget Modeler <[email protected]> 
wrote:




Thanks Jim,
Much to think about...
~PM

Date: Sat, 28 Dec 2013 08:33:22 -0500
Subject: Re: [agi] Problem [Space] formulation
From: [email protected]


To: [email protected]

You could, for example, try to write a planner for solving the problem of 
selecting the relevant factors to form an initial state (and all the other 
possible states) for your lower solver to work with.  To do this you would 
probably have to use some kind of abstractions.  Since you want it to be based 
on solid scientific methods, you might try something like material objects, and 
properties and relations between objects.  This, you might find, would not work 
so well, since you want an AGI program to deal with imaginary things like 
fantasies and creative solutions to problems.  So then you might start by 
working with something like ideas, concepts, and conceptual relations.

However, this does not solve the representation problem, so then you have to 
include something about representations.  But now you are back to the earlier 
problem: you cannot 'represent' everything that you can think about, so you 
might try to find a slightly better word-concept to use, like 'symbols' or 
'references'.  Even though you cannot 'represent' everything there is about the 
universe, for example, you can still refer to it.  Since you will need a way to 
represent the symbolic references, you might try to rely on grammatical 
primitives like 'nouns' and 'verbs'.  This, however, implies that every 
concept-situation you might want your solver to represent could be represented 
with individual words.  That does not quite make sense, so you might then use 
actions and objects as part of your native abstractions for the solver to use 
in solving the problem of building a state system (or other kind of system) for 
a higher-level (or more primitive) solver to solve.  And you might think about 
using something like computational syntax instead of linguistic syntax, since 
your solver is going to be a computational program and it will act as if it 
were operating on computational strings anyway.

These ideas, however, still do not solve the problem, because they are much too 
abstract and too full of gaps.  And while the abstractions may seem ideal for 
representing any possible situation, it turns out they are not, because they 
are themselves possible states of thought.  So, for another example: can you 
have a representation of something that cannot be represented?  Well, yes, 
since you can represent a reference to it.  But that means your solver has to 
be on guard against getting caught in illogical representations that refer to 
things that cannot be referred to in the way your abstract program is going to 
try to refer to them.  While the comprehension and representation of the entire 
universe (for example) may not represent much of a problem to anyone other than 
a fool, it does show how problems of symbolic representations and references 
can creep into your solver when you start with higher abstractions.

So now you need to add an incremental trial-and-error method to detect such 
possibilities.  And since such things may be hidden by more complicated systems 
of references, the problems cannot be easily detected.  And incremental methods 
may not be strong enough to gain traction for your solver to create possible 
plan states (or other plan domains), so your solver will have to be able to 
latch onto something that will build traction into its incremental 
trial-and-error methods as it progresses toward its goal of creating a domain 
that is suitable for a planner.



Oops, I am talking too much about my own ideas again.  Sorry.  However, it all 
seems so relevant to the situation you just described, and to the problem of 
creating an actual AGI program, that I can excuse myself.




On Fri, Dec 27, 2013 at 11:34 PM, Piaget Modeler <[email protected]> 
wrote:




So I'm writing my solvers, and I hit the first roadblock...
Given (1) a sea of sensory stimuli, (2) a prioritized set of goals, and (3) a 
possibly empty set of plans, how does one select the relevant stimuli to form 
an initial state for a problem? 



Another way to ask the question is how do humans select relevant features from 
their current environment to be able to formulate or retrieve plans that 
address their goals? 



And how many of these environmental features are enough to describe an initial 
state? 
So there is PDDL (the Planning Domain Definition Language).  But to use PDDL, 
one first has to solve the problem of describing the relevant features of the 
environment. 
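For concreteness, here is a minimal sketch of what a PDDL problem file looks like once the relevant features have been chosen. The `toy-nav` domain, its objects, and its predicates are hypothetical, invented purely to show where the selected features would land (each becomes a ground atom in `(:init)`):

```lisp
;; Illustrative only: assumes a hypothetical domain "toy-nav" with
;; predicates (at ?l) and (connected ?a ?b) over location objects.
(define (problem reach-toy)
  (:domain toy-nav)
  (:objects a b c - location)
  ;; The selected environmental features form the initial state:
  (:init (at a) (connected a b) (connected b c))
  (:goal (at c)))
```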



How do we come up with these relevant features, to be able to formulate an 
initial state? 
Thoughts?  
~PM
(I forgot what Newell & Simon said about the PSCM & Unified Theories of 
Cognition.  Time for more research...)

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
