Ian, Doug, Scott & Marek,

This is a really great discussion. We tried to discuss this months ago but
the knowledge we needed was not there in the group (very much including
myself). I have a couple of comments just to clarify the meaning of a few
terms:

Business:
When we say "business" we mean the experimenter's purpose in using NuPIC.
This is not restricted to an actual commercial application; rather, it
conveys that the experimenter is interested in something which may not be
directly produced in the brain-equivalent system.

Classifier:
The classifier is an object which is instructed (or configured) to gather
certain statistics on the historical activity of the CLA, in order to allow
the current (SP+TP) state to be converted into a datum which is of business
interest to the experimenter (e.g. what power consumption will be in 12
hours, in the hotgym example). This datum is usually the value of one of
the input fields, a chosen number of steps ahead. The classifier may
produce a number of predicted inputs, each with an associated probability.
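To make this concrete, here is a minimal sketch of the kind of bookkeeping
involved. This is *not* the real CLAClassifier - the class name and counting
scheme are my own simplification - it just pairs the CLA state from N steps
ago with the input value seen now, and returns a value:probability set:

```python
from collections import defaultdict, deque


class SimpleStepClassifier:
    """Toy N-step classifier: counts which input values historically
    followed each CLA state, N steps later."""

    def __init__(self, steps):
        self.steps = steps
        self.history = deque()  # CLA states from the last `steps` iterations
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, active_state, actual_value):
        # Pair the state seen `steps` iterations ago with the value seen now.
        if len(self.history) == self.steps:
            past_state = self.history.popleft()
            self.counts[past_state][actual_value] += 1
        self.history.append(frozenset(active_state))

    def infer(self, active_state):
        # Return {value: probability} for the input expected `steps` ahead.
        stats = self.counts.get(frozenset(active_state), {})
        total = sum(stats.values())
        return {v: c / total for v, c in stats.items()} if total else {}
```

The real classifier works on cell/column indices and bucketed scalar values,
but the shape of the answer - a distribution over predicted inputs - is the
same.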

Reconstruction:
This is the reverse of what the SP does: it converts an SDR back into the
input which (most likely) produced it. It can be used in a similar way to
the classifier, if you are interested in only one-step prediction, by using
the predictive state as the "SDR" for reconstruction. Reconstruction uses
the statistics embodied in the feedforward permanences to compute the most
likely input bits from the predictive columns.
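A rough sketch of that computation (the array layout and the 0.2 connected
threshold are assumptions for illustration, not NuPIC's actual values):

```python
import numpy as np


def reconstruct_input(predicted_columns, permanences, connected_threshold=0.2):
    """Sum connected-synapse permanences from the predicted columns back
    onto the input space; stronger totals mean more likely input bits.

    `permanences` is a (num_columns, num_input_bits) array of feedforward
    permanence values."""
    votes = np.zeros(permanences.shape[1])
    for col in predicted_columns:
        connected = permanences[col] >= connected_threshold
        votes[connected] += permanences[col][connected]
    return votes
```

The encoders would then map the strongest bits back to the nearest actual
input value.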

I have an idea which is a more brain-like way of doing the classifier (and
also does reconstruction):

Add a new SPRegion which has the same number of bits as the input; call it
"DecodeRegion". You add one of these for every task you would normally give
the Classifier (e.g. you'd add a "5StepRegion" and a "12StepRegion" for
those predictions). The input to each of these regions is the TP output from
N steps ago (5 and 12 in the example). To train the region, you force the
active columns to be the current input and run the SP learning step to
train the permanences against the TP output from N steps ago. To decode a
prediction, turn learning off and feed the current TP output into the
region. The (non-inhibited) pattern of activation potentials will give you
the prediction in terms of the (encoded) input. An algorithm similar to the
current classifier's can then extract a similar value:probability set from
this data.
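A toy sketch of what I have in mind (the class name, parameters and the
simplified Hebbian update are all hypothetical - the real thing would reuse
the SP's own learning rules):

```python
import numpy as np
from collections import deque


class DecodeRegion:
    """One column per encoded-input bit. Training forces the active columns
    to be the current encoded input and nudges their permanences toward the
    TP output from `steps` iterations ago. Decoding is an uninhibited
    overlap computation over the encoded-input bits."""

    def __init__(self, num_input_bits, tp_size, steps, inc=0.05, dec=0.01):
        self.perm = np.random.uniform(0.1, 0.3, (num_input_bits, tp_size))
        self.delayed = deque(maxlen=steps)  # TP outputs from past steps
        self.inc, self.dec = inc, dec

    def learn(self, tp_output, encoded_input):
        # tp_output and encoded_input are binary numpy arrays.
        if len(self.delayed) == self.delayed.maxlen:
            past_tp = self.delayed[0]
            # Forced active columns = currently-on encoded input bits.
            for col in np.flatnonzero(encoded_input):
                self.perm[col][past_tp == 1] += self.inc
                self.perm[col][past_tp == 0] -= self.dec
                self.perm[col] = np.clip(self.perm[col], 0.0, 1.0)
        self.delayed.append(tp_output.copy())

    def decode(self, tp_output, threshold=0.2):
        # Overlap of each column's connected synapses with the current TP
        # output, with no inhibition: an activation profile over input bits.
        connected = self.perm >= threshold
        return (connected * tp_output).sum(axis=1)
```

After training, bits with the highest decode scores are the region's
prediction of the encoded input N steps ahead.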

I believe that the above mechanism (or something similar) is one component
of the top-down communication in the brain, as it learns to "force" the
column activity in lower regions (V1 in your example, Ian) rather than the
inputs. This is easy to do by using these top-down signals: send them into
Layer 1, which raises the predictive potentials of Layer 3 cells likely to
predict the "intended" activation. This requires my prediction-primed SP,
of course!

Regards,

Fergal Byrne



On Sat, Nov 23, 2013 at 4:13 AM, Ian Danforth <[email protected]> wrote:

>
>
>
> On Fri, Nov 22, 2013 at 6:07 PM, Scott Purdy <[email protected]> wrote:
>
>> To be clear, when I say reconstruction in this email I am talking about
>> the process of 1) taking the columns for each predicted cell in the TP and
>> 2) selecting the connected input bits (possibly including the connectedness
>> as a weight) to those corresponding SP coincidences/columns and 3) using
>> the encoders to select the closest value to the selected input bits as the
>> predicted value.
>>
>
> 2->3 is where the juicy and interesting bits are. It would be good to pull
> out a version that still had this implemented from git history and review.
> There was also an exploration done by an intern that we could probably
> review and release.
>
>
>> On Fri, Nov 22, 2013 at 3:54 PM, Ian Danforth <[email protected]>
>> wrote:
>>
>> > Scott,
>> >
>> >  Reconstruction was implemented because it was much closer to biology
>> than any classifier.
>>
>> I think your rationale here is that there is information propagation
>> "downwards" in the brain and that this is what we are doing with
>> reconstruction. That may be true, but the way that the information is used
>> in reconstruction is different from the way it is used in feedback or other
>> processes that actually happen in the brain. So it is true that information
>> flows downward, but not true that the brain performs reconstruction with
>> that information.
>>
>
> I may be misunderstanding you, but that's demonstrably false. The Kanizsa
> triangle is a great example (
> http://en.wikipedia.org/wiki/Illusory_contours) where the brain is using
> a high level gestalt to reconstruct missing information. You see
> activations in V1 from these figures *as if* the input were a complete
> triangle. The feedback inhibitory pathway reduces the inhibitory signals
> in a select group of neurons that it expects to be firing and then the
> naturally noisy input is sufficient to cause actual feed forward input in
> those areas, reinforcing the perception that there are lines/edges where
> there are none. There are also cases where excitatory feedback drives lower
> level activations which is exactly what reconstruction does. It
> reconstructs missing parts of the input from whatever was fed in.
>
>  Ian
>
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>


-- 

Fergal Byrne, Brenter IT

http://inbits.com - Better Living through Thoughtful Technology

e:[email protected] t:+353 83 4214179
Formerly of Adnet [email protected] http://www.adnet.ie
