Cheers Jeff,

I've been looking at a lot of the code (for the first time in earnest, I'm
ashamed to say), and indeed there is a histogram for each cell. It's called
a BitHistory and effectively stores a running (age-weighted) average of the
number of times each input value range has occurred when the cell became
active.
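For a concrete (if speculative) picture, here's a minimal sketch of that kind of per-cell, age-weighted histogram. The class name echoes BitHistory, but the fields, the decay scheme and the alpha parameter are my own illustrative guesses, not the actual nupic implementation:

```python
class BitHistory:
    """Running, age-weighted histogram of the input value ranges
    ("buckets") seen whenever one particular cell became active.
    Illustrative sketch only, not the real nupic BitHistory."""

    def __init__(self, num_buckets, alpha=0.001):
        self.counts = [0.0] * num_buckets  # one slot per input value range
        self.alpha = alpha                 # higher alpha = faster forgetting

    def update(self, active_bucket):
        # Decay every count a little, so old observations fade with age...
        for i in range(len(self.counts)):
            self.counts[i] *= (1.0 - self.alpha)
        # ...then credit the bucket that actually occurred this time.
        self.counts[active_bucket] += self.alpha

    def distribution(self):
        """Normalise the counts into a probability distribution."""
        total = sum(self.counts)
        if total == 0.0:
            return [1.0 / len(self.counts)] * len(self.counts)
        return [c / total for c in self.counts]
```

An exponential decay like this is one common way to get a "running average" that favours recent history; the real code may weight ages differently.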

As Jeff says, this was an engineering/business decision and it works very
well. When we don't have a good, effective and efficient mechanism based on
neuroscience, we should substitute something which does the job. The
neocortex does this using its multi-layer structure, the complex
connection structures in a region (and the various species of neurons),
hierarchy, the thalamus, and several other things we don't yet know about.
Good engineering practice says we should solve one problem at a time, so
the current classifier is solving the problem of making a specific
prediction for a specific field a specific number of steps ahead.
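The mechanism Jeff describes below (buffer the CLA state, then pair it with the input that actually arrives some steps later, and combine the histograms of the active cells at inference time) could be sketched roughly as follows. This is a toy illustration assuming a bucketed scalar encoder; the class and method names are mine, not NuPIC's actual classifier API:

```python
from collections import deque

class MultiStepClassifier:
    """Toy sketch of a `steps`-ahead classifier: buffer which cells were
    active, and when the real input arrives `steps` timesteps later,
    pair it with the buffered state. Not NuPIC's real implementation."""

    def __init__(self, num_cells, num_buckets, steps, alpha=0.001):
        self.steps = steps
        self.buffer = deque(maxlen=steps)   # active-cell sets, oldest first
        # one age-weighted histogram of input buckets per cell
        self.tables = [[0.0] * num_buckets for _ in range(num_cells)]
        self.alpha = alpha

    def learn(self, active_cells, actual_bucket):
        if len(self.buffer) == self.steps:
            # The cells active `steps` timesteps ago "predicted" this input.
            for cell in self.buffer[0]:
                row = self.tables[cell]
                for i in range(len(row)):
                    row[i] *= (1.0 - self.alpha)   # age-decay old counts
                row[actual_bucket] += self.alpha
        self.buffer.append(set(active_cells))

    def infer(self, active_cells):
        # Combine the histograms of all currently active cells.
        num_buckets = len(self.tables[0])
        votes = [0.0] * num_buckets
        for cell in active_cells:
            for i, v in enumerate(self.tables[cell]):
                votes[i] += v
        total = sum(votes)
        if total == 0.0:
            return [1.0 / num_buckets] * num_buckets
        return [v / total for v in votes]
```

Note the design point Jeff makes: only the *active* cells matter at learning time, and predicting one hour and two hours ahead would simply mean running two of these side by side with different `steps`.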

I see the classifier, not as a totally artificial engineering artefact, but
as a kind of tuneable window on what a full cortical system is doing. The
histogram in the classifier is completely analogous to a dendrite, in that
it is storing an age-decaying probability distribution between inputs and
activation.

In the real neocortex, I'm pretty sure there is a two-way connection
between the cells and the inputs, and there is also a way of coding the
time delays so that we can extract the predictions across several
timesteps. In fact, I believe what the neocortex does is generate all
possible information from the combination of memory and the recent input
data, and let the users of that information learn how best to exploit it.
The most useful outputs are retained and the less useful ones decay over
time.

If this is the case, the classifier is just selecting a particular subset
of the information being (potentially) generated by the region. When you
set up the classifier, you're effectively building the bits of neural
machinery needed to produce the prediction you're interested in. This can
also be regarded as a shortcut through the evolutionary development of
specific "windows" into the computational output of the region.

Regards,

Fergal Byrne



On Sun, Oct 20, 2013 at 7:06 PM, Jeff Hawkins <[email protected]> wrote:

> Hi,
>
> Fergal is correct.  “Reconstruction” was the first approach we used to
> take the state of the CLA’s predicted cells and turn it into a value that
> can be used outside of the CLA.  It worked fine.  The problem we had was
> customers often didn’t want to know the prediction for the next data
> point.  They wanted a prediction for say one hour in advance, or every hour
> for twenty four hours.  Reconstruction didn’t suffice.  The solution was to
> implement a separate classifier.  We stored the state of the CLA in a
> buffer and when the appropriate input actually arrived (say one hour hence)
> we could pair the earlier state of the CLA with the correct input.  The
> classifier doesn’t need to know what cells are in the predicted state, only
> what cells are active.  We had some options on how to implement the
> classifier.  I believe what we implemented was to keep a histogram of input
> values for each cell.  When a cell became active we updated the histogram.
> To make a prediction we combined the histogram of all the currently active
> cells.  This worked well.  A better prediction could be created if we kept
> a histogram for each active dendrite segment instead of each active cell.
> This would take more memory and more training data so we didn’t pursue it.
>
>
> If a customer wanted to make a prediction one hour and two hours in
> advance we would implement two classifiers.
>
> Jeff
>
>
> *From:* nupic [mailto:[email protected]] *On Behalf Of *Fergal
> Byrne
> *Sent:* Saturday, October 19, 2013 3:35 AM
> *To:* NuPIC general mailing list.
> *Subject:* Re: [nupic-dev] Regarding CLA
>
>
> Hi Aseem,
>
>
> The grannies idea came from the go-back to the bits idea, which I believe
> the Numenta guys call "reconstruction". That strategy did not use cells and
> reverse dendrites, but a procedural equivalent. They'll tell you why this
> was abandoned in favour of the lookup tables for Grok.
>
>
> Regards,
>
>
> Fergal Byrne
>
> —
> Sent from Mailbox <https://www.dropbox.com/mailbox> for iPhone
>
>
> On Sat, Oct 19, 2013 at 8:52 AM, Aseem Hegshetye <[email protected]>
> wrote:****
>
> Hi,
>
> This was one of the best replies I have got on this mailing list.
> The grannies idea looks great; we should try simulating it, Fergal.
> I also thought of something that might replace those lookup tables.
>
> When we have cells in the predictive state, it's like predicting that the
> input bits are going to fire that column.
> So if we go in reverse and check that column's weights to all 121 input
> bits, we can infer the next input. Plus we have 40 different columns in the
> predictive state, whose weights with input patterns add up to give a decent
> prediction.
> I don't know if it's already been tried, but if not I am curious to try,
> and I have already started simulations. The grannies idea also needs to be
> tried.
>
> regards
> Aseem Hegshetye
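Aseem's reverse-lookup idea could be prototyped along these lines. The 121 input bits and 40 predictive columns come from his message; the random weight matrix, the `reconstruct` name and the top-k threshold are illustrative stand-ins of mine, not real CLA state:

```python
import random

# Toy sketch of the reverse-lookup ("reconstruction") idea: sum each
# predictive column's weights over the input bits, then keep the
# highest-scoring bits as the inferred next-input SDR.

NUM_INPUT_BITS = 121
NUM_COLUMNS = 2048

random.seed(42)
# column -> weight to each input bit (stand-in for proximal permanences)
weights = [[random.random() for _ in range(NUM_INPUT_BITS)]
           for _ in range(NUM_COLUMNS)]

def reconstruct(predictive_columns, top_k=8):
    """Sum the predicted columns' input weights and keep the top_k
    highest-scoring bits as the predicted next input."""
    score = [0.0] * NUM_INPUT_BITS
    for col in predictive_columns:
        for bit, w in enumerate(weights[col]):
            score[bit] += w
    ranked = sorted(range(NUM_INPUT_BITS), key=score.__getitem__, reverse=True)
    return set(ranked[:top_k])

predicted_bits = reconstruct(range(40))   # 40 columns in predictive state
```

With real permanences instead of random weights, `predicted_bits` could then be fed back through the encoder's inverse to recover a field value, which is essentially what "reconstruction" did.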
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>


-- 

Fergal Byrne

<http://www.examsupport.ie>Brenter IT
[email protected] +353 83 4214179
Formerly of Adnet [email protected] http://www.adnet.ie
