I have a paper that details how this works; send me an email if you'd like a copy.

In PAM-P2, percepts (whether visual, auditory, proprioceptive, etc.) are asserted to the current model. These assertions activate "monads". Monads in turn activate schemes. Each scheme has a reifying monad, which the scheme activates according to its merge type (PASSTHROUGH, AND, OR, NAND, NOR, NOT). The merge type is determined by the relationship the scheme represents; basic relationships are UNISON, SERIES, OPTION, CASE, TYPE, etc. Activation flows along several dimensions: perception, expectation, and intention, so a monad can be activated in multiple ways. Percepts activate monads along the perception dimension.

The entire image presented will ultimately activate a single reifier as it passes through several tiers of pattern abstraction. In addition, the sound of the word "Chair" will be activated in sequence with the image, and ultimately the reifying monad for the image and the monad for the word will be bound together in a series scheme. This will occur again for the remaining training instances. The series schemes will also be assessed by processes that identify commonalities and predictions. As the training examples are repeated, predictions ensue and the monads are activated along the expectation dimension. When predictions are satisfied, there is a shift from sequence to concurrency and also from sequence to optionality, which happens as part of automaticity.

Activation along the expectation dimension triggers simulation, whereby expected monads can be "visualized", i.e. activated in the forward model. This activation is propagated throughout the forward model from reified monads down to perceptual monads. Another type of simulation is also possible, but we'll save that for another day. Check out the site http://piagetmodeler.tumblr.com for some diagrams of how this works. This is a good start. Cheers. ~PM.
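A minimal Python sketch of how the monads, schemes, merge types, and activation dimensions described above could fit together. The class names, the use of an AND merge to stand in for the series binding of the chair image and the word "chair", and the activation bookkeeping are illustrative assumptions, not PAM-P2's actual code; timing, the tiers of pattern abstraction, and the forward model are left out.

from dataclasses import dataclass, field
from enum import Enum, auto

class MergeType(Enum):
    PASSTHROUGH = auto()
    AND = auto()
    OR = auto()
    NAND = auto()
    NOR = auto()
    NOT = auto()

class Dimension(Enum):
    PERCEPTION = auto()
    EXPECTATION = auto()
    INTENTION = auto()

@dataclass
class Monad:
    name: str
    # Activation is tracked separately along each dimension.
    activation: dict = field(default_factory=lambda: {d: False for d in Dimension})

    def activate(self, dim: Dimension) -> None:
        self.activation[dim] = True

    def is_active(self, dim: Dimension) -> bool:
        return self.activation[dim]

@dataclass
class Scheme:
    # A scheme binds its input monads and activates its reifying monad
    # according to its merge type.
    inputs: list
    reifier: Monad
    merge_type: MergeType

    def update(self, dim: Dimension) -> None:
        states = [m.is_active(dim) for m in self.inputs]
        fire = {
            MergeType.PASSTHROUGH: any(states),  # assumption: pass any input through
            MergeType.AND: all(states),
            MergeType.OR: any(states),
            MergeType.NAND: not all(states),
            MergeType.NOR: not any(states),
            MergeType.NOT: bool(states) and not states[0],
        }[self.merge_type]
        if fire:
            self.reifier.activate(dim)

# Binding the visual reifier for the chair image with the monad for the
# spoken word "chair"; the series relationship is approximated here with
# an AND merge, ignoring its sequential/timing aspect.
chair_image = Monad("reifier:chair-image")
chair_word = Monad("monad:word-chair")
chair_concept = Monad("reifier:chair")
series = Scheme([chair_image, chair_word], chair_concept, MergeType.AND)

chair_image.activate(Dimension.PERCEPTION)  # percept asserted to the current model
chair_word.activate(Dimension.PERCEPTION)
series.update(Dimension.PERCEPTION)
print(chair_concept.is_active(Dimension.PERCEPTION))  # True

Propagating expectation activation back down from reifiers to perceptual monads (the simulation step) would walk the same scheme graph in the opposite direction.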
From: [email protected]
To: [email protected]
Subject: RE: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
Date: Tue, 23 Oct 2012 10:14:15 -0700

We would teach the system, PAM-P2 for example, the same way we would teach an infant or toddler. We would show the picture, and then say the word "Chair" or have the word "chair" written under the picture. We would also teach the cognitive system to say the word associated with the picture. We could do this for some number of training examples t < 25. We would then later prompt the system with a test image, ask what it is, and hopefully the system will respond "Chair". Pretty much that's how it should happen. The cognitive system should learn to associate visual, auditory, proprioceptive, and other modalities within its current, forward, and episodic models in the same manner as children. (A minimal sketch of this train-then-test loop follows the quoted message below.) ~PM.

From: [email protected]
To: [email protected]
Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
Date: Tue, 23 Oct 2012 17:29:23 +0100

PM & Aaron,

You do realise that whatever semantic net system you use must apply to not just one chair, but chair after chair – image after image? Bearing that in mind, explain the elements of your semantic net which you will use to analyse these fairly simple figures as chairs:

http://image.shutterstock.com/display_pic_with_logo/95781/95781,1218564477,2/stock-vector-modern-chair-vector-16059484.jpg

Let's label these chairs 1-25 (going left to right from the top down, row after row). Start with just 1 and 2 at the top left, and explain how your net will recognize 2 as another example of 1. In other words, how do you define a "chair" in terms of simple abstract forms? Then we can apply your system, successively, to 3, 4, etc.

This is the problem that has defeated all AGI-ers and all psychologists and philosophers so far. But Aaron (and PM?) has a semantic net solution to it - if you can solve jungle scenes, this should be a piece of cake.

I am saying, Aaron, you do not understand this problem – the problem of visual object recognition/conceptualisation/applicability of semantic nets. You are saying you do – and it's me who is confused. Show me.
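For the train-then-test procedure PM outlines in the message of Tue, 23 Oct 2012 10:14 above, a minimal sketch, again in Python and again purely illustrative: present_percept, prompt_production, and respond are hypothetical stand-ins, not PAM-P2's actual interface.

def train(system, examples, max_examples=25):
    # Present each training picture together with the spoken and written
    # word, and teach the system to produce the word itself.
    for image, label in examples[:max_examples]:
        system.present_percept("visual", image)
        system.present_percept("auditory", label)  # say the word "chair"
        system.present_percept("text", label)      # word written under the picture
        system.prompt_production(label)            # teach it to say the word

def test(system, test_image):
    # Later, prompt with a test image and ask what it is.
    system.present_percept("visual", test_image)
    return system.respond("What is this?")         # hope for: "Chair"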
