One last try... God help me...
Let's assume a 640 x 480 visual field with black and white pixels for 
simplicity.
There are two approaches here; let's start with what I call the raw approach. 
One frame is grabbed, and each pixel of the frame is asserted to the model 
using the proposition PERCEPT(Visual, ' PIXEL x y c '), for example
       PERCEPT(Visual, ' PIXEL  240 320 W ')
       ==> percept-76855
where "percept-76855" identifies the specific monad, representing a white 
pixel at (x, y) location (240, 320), that has been activated.
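A minimal sketch of this raw approach in Python (Model and assert_percept are 
names I am inventing here for illustration, not the actual PAM-P2 interface); 
asserting the same proposition twice reuses the existing monad rather than 
creating a new one:

```python
class Model:
    def __init__(self):
        self.monads = {}    # (modality, content) -> monad id
        self.counter = 0

    def assert_percept(self, modality, content):
        """Assert a percept proposition; activate (or create) its monad."""
        key = (modality, content)
        if key not in self.monads:
            self.counter += 1
            self.monads[key] = f"percept-{self.counter}"
        return self.monads[key]

model = Model()
# A tiny 4x4 black-and-white frame stands in for the full 640x480 field.
frame = [["W" if (x + y) % 2 == 0 else "B" for x in range(4)] for y in range(4)]
for y, row in enumerate(frame):
    for x, color in enumerate(row):
        model.assert_percept("Visual", f"PIXEL {x} {y} {color}")
```

Re-asserting a pixel already in the model simply returns its existing percept 
id, which is what makes later frames cheap.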

Once the entire frame is asserted to the model, a concurrence activation 
process looks to chunk groups of percepts that have the same activation time 
but are not yet grouped together.  A UNISON relationship is synthesized for 
each such chunk, indicating that these percept monads occurred at the same 
time.  For example,
        UNISON( < percept-76855, percept-672232, percept-89004, percept-74325, 
percept-20876, percept-45694, percept-33368 > )
       ==>  unison-35437
More unison relationships are formed on this tier.  Then unison relationships 
are formed on the next tier.  For example, 
       UNISON( < unison-35437, unison-89465, series-89033, unison-23455, 
series-45688, series-98809, unison-78900 > )
       ==>  unison-1245388
This goes on until we have a final tier which represents the entire frame; 
let's call it unison-9426772.
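The tiering process can be sketched roughly as follows, under the simplifying 
assumption that chunks are fixed-size groups (the names tier_up and 
chunk_to_root are illustrative, not from PAM-P2 itself):

```python
def tier_up(ids, chunk_size, registry):
    """One tier of concurrence chunking: synthesize a UNISON per chunk."""
    out = []
    for i in range(0, len(ids), chunk_size):
        uid = f"unison-{len(registry) + 1}"
        registry[uid] = tuple(ids[i:i + chunk_size])  # member monads
        out.append(uid)
    return out

def chunk_to_root(percept_ids, chunk_size=7):
    """Tier up repeatedly until one reifier represents the whole frame."""
    registry, tier = {}, list(percept_ids)
    while len(tier) > 1:
        tier = tier_up(tier, chunk_size, registry)
    return tier[0], registry
```

With 49 percepts and chunks of 7, the first tier yields 7 unisons and the 
second yields the single root reifier for the frame.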
These monads and schemes (or monemes as I call them) are the elements that 
comprise your image at this point. 
We begin again with the next frame.  But now, instead of having to construct 
each percept and intervening scheme, many of the schemes and reifying monads 
will be reused because they already exist in the model.
The second approach is to assert visual features rather than pixels, but the 
principle is the same.  For example, we could use OpenCV to ascribe various 
features to the incoming frame and assert only those features.  Hence,
        PERCEPT(Visual, ' EDGE  ... ')
       ==> percept-93453
        PERCEPT(Visual, ' CORNER  ... ')
       ==> percept-93454
        PERCEPT(Visual, ' CIRCLE  ... ')
       ==> percept-93455
These features can then be grouped at a higher level of visual perception.
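A sketch of this feature-based variant: the feature extraction itself would 
come from something like OpenCV's edge or Hough routines; here a pre-extracted 
feature list stands in for that step, and assert_percept is again an invented 
stand-in for the model interface:

```python
monads = {}

def assert_percept(modality, content):
    """Assert a percept; reuse the monad if the proposition already exists."""
    key = (modality, content)
    if key not in monads:
        monads[key] = f"percept-{len(monads) + 1}"
    return monads[key]

# (kind, parameters) pairs as a hypothetical feature extractor might emit them
features = [
    ("EDGE", "x1=10 y1=10 x2=50 y2=10"),
    ("CORNER", "x=50 y=10"),
    ("CIRCLE", "cx=30 cy=40 r=12"),
]
feature_percepts = [assert_percept("Visual", f"{kind} {params}")
                    for kind, params in features]
```

Only a handful of feature percepts are asserted per frame, rather than one 
percept per pixel.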
So the elements you refer to are the percepts and intervening schemes (the 
monemes) that comprise the total experience within the current model. 
Does that answer your question?  If not, why not?  These are the elements.
Then there are transformations that occur upon these elements. 
~PM.


From: [email protected]
To: [email protected]
Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good 
Enough
Date: Tue, 23 Oct 2012 21:58:51 +0100

PM,
 
You have to identify the **elements** or “monads” pace you that are common 
to those chairs – or just begin to identify *one* element common to them. 
[No, a complete analysis is not expected.]  You haven’t done this.  And these 
figures are not hard to analyse.
 
Supplying me with a long architecture or analysis of your system does not 
address that problem in any way at all.  There is nothing in what you have 
written that addresses: what are the elements common to the different examples 
of a concept/chair, or a visual object (or scene, or text, or anything else)? 
You just **presuppose** that you can analyse them without the slightest 
attempt at a demonstration/instantiation.
 
The problem of AGI is that of multiformity/novel transformations – we are 
always faced with different objects/scenes where there may be no common 
configurations of common elements – where each example can be considered as a 
creative, novel transformation of the last (see those chairs again) – and yet 
the brain has done what no AI system or technology has achieved, and found a 
way to classify them together.  You’re not addressing that.  Neither, AFAIK, 
is anyone else.
 

From: Piaget Modeler
Sent: Tuesday, October 23, 2012 6:47 PM
To: AGI
Subject: RE: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
 

I have a paper that details how this works, send me an email if 
you'd like to get a copy. 
 
In PAM-P2, percepts (whether visual, auditory, proprioceptive, etc.) are 
asserted to the current model.  These assertions activate "monads".  Monads in 
turn activate schemes.  Each scheme has a reifying monad which is activated by 
the scheme according to the scheme's merge type (PASSthrough, AND, OR, NAND, 
NOR, NOT).  The merge type is defined by the relationship that the scheme 
represents.  Basic relationships are UNISON, SERIES, OPTION, CASE, TYPE, etc.
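For concreteness, one way the listed merge types could map onto boolean 
combination of member activations (this mapping is my guess at the semantics, 
not the actual PAM-P2 definition):

```python
def reify(merge_type, inputs):
    """Activation of a scheme's reifying monad from its members' activations."""
    if merge_type == "PASS":
        return inputs[0]           # pass-through of a single member
    if merge_type == "AND":
        return all(inputs)         # all members must be active
    if merge_type == "OR":
        return any(inputs)         # any member suffices
    if merge_type == "NAND":
        return not all(inputs)
    if merge_type == "NOR":
        return not any(inputs)
    if merge_type == "NOT":
        return not inputs[0]
    raise ValueError(f"unknown merge type: {merge_type}")
```

Under this reading, a UNISON scheme would plausibly carry an AND merge type 
(all its members co-occur) and an OPTION scheme an OR.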
 
Activation flows along several dimensions: perception, expectation, and 
intention; thus a monad can be activated in multiple ways.  Percepts activate 
monads along the perception dimension.  The entire image presented will 
ultimately activate a single reifier as it passes through several tiers of 
pattern abstraction.  In addition, the sound of the word "chair" will be 
activated in sequence with the image, and ultimately the reifying monad for 
the image and the monad for the word will be bound together in a series 
scheme.
 
This will occur again for the remaining training instances.  The series 
schemes will also be assessed by processes which will identify commonalities 
and predictions.
 
As the training examples are repeated, predictions ensue and the monads are 
activated along the expectation dimension.  When predictions are satisfied, a 
shift occurs from sequence to concurrency, and also from sequence to 
optionality, as part of automaticity.
 
Activation along the expectation dimension triggers simulation, whereby 
expected monads can be "visualized", i.e. activated in the forward model. 
This activation is propagated throughout the forward model, from reified 
monads down to perceptual monads.
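This top-down propagation can be sketched as a traversal of the scheme 
membership structure (the structures and names here are assumptions for 
illustration):

```python
def simulate(root, members_of):
    """Spread activation from a reified monad down to perceptual monads.

    members_of maps each reifying monad to its scheme's member monads;
    monads absent from the map are leaves (perceptual monads).
    """
    activated, stack = set(), [root]
    while stack:
        monad = stack.pop()
        if monad in activated:
            continue
        activated.add(monad)
        stack.extend(members_of.get(monad, ()))
    return activated

# A toy two-tier forward model: one top reifier over one unison and a percept.
forward_model = {
    "unison-2": ["unison-1", "percept-3"],
    "unison-1": ["percept-1", "percept-2"],
}
expected = simulate("unison-2", forward_model)
```

Activating the top reifier thus "visualizes" every monad beneath it, down to 
the perceptual tier.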
 
Another type of simulation is also possible, but we'll save that for 
another day.
 
Check out the site http://piagetmodeler.tumblr.com for some diagrams of how 
this works.
 
This is a good start. 
 
Cheers.
 
~PM.
 
 



From: [email protected]
To: [email protected]
Subject: Re: [agi] Re: 
Superficiality Produces Misunderstanding - Not Good Enough
Date: Tue, 23 Oct 
2012 17:29:23 +0100

PM & Aaron,
 
You do realise that whatever semantic net system you use must apply to not 
just one chair, but chair after chair – image after image?
 
Bearing that in mind, explain the elements of your semantic net which you 
will use to analyse these fairly simple figures as **chairs**:
 
http://image.shutterstock.com/display_pic_with_logo/95781/95781,1218564477,2/stock-vector-modern-chair-vector-16059484.jpg
 
Let’s label these chairs 1-25  (going L to R from the top down, row 
after row)
 
Start with just 1. and 2. top left and explain how your net will recognize 
2 as another example of 1.
 
How IOW do you define a “chair” in terms of simple abstract forms?
 
Then we can apply your system, successively, to 3. 4. etc.
 
This is the problem that has defeated all AGI-ers and all psychologists and 
philosophers so far. 
 
But Aaron (and PM?) has a semantic net solution to it -   if you 
can solve jungle scenes, this should be a piece of cake.
 
I am saying, Aaron, you do not understand this problem – the problem of 
visual object recognition/conceptualisation/applicability of semantic nets.
 
You are saying you do – and it’s me who is confused. Show me.
 
 


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
