On Fri, Aug 30, 2013 at 8:30 PM, Tim Boudreau <[email protected]> wrote:

> On Fri, Aug 30, 2013 at 5:31 PM, Jeff Hawkins <[email protected]> wrote:
>
>> Sorry, I couldn’t follow your question.
>>
>>
>
Hi Jeff, et al.,

I don't mean to harp on this, but this question slipped through the cracks,
and it seems like a pretty fundamental one.  Bad question?  Unanswerable?
Something else?

In a nutshell:  If a good prediction for several steps in advance is
indistinguishable from a bad prediction for one step in advance, how do you
avoid penalizing synapses which make correct predictions for further in
advance than one step?

I understand there's a "classifier" which iterates snapshots and
monday-morning-quarterbacks the synapses, but also that it's a short-term
hack, not the way things are supposed to work.  Here's the more specific
explanation of the question:


> Let me try to clarify.  It's really an implementation-in-software
> question, but one that must be backed by biology.  Here are some facts as I
> understand them:
>  - There are three states a cell can be in - not-active,
> activated-from-input or predictively-activated.
>  - An incorrect prediction results in reducing the permanence of the
> associated synapse.
>  - A predictively activated cell may be making a prediction about several
> steps into the future.
>
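To pin those three facts down, here's a toy data-structure sketch (my own simplification, with made-up names and constants, not NuPIC's actual code):

```python
# Toy sketch of the facts above -- my own simplification, not NuPIC code.
# CellState, Synapse, CONNECTED_PERM, etc. are invented for illustration.
from enum import Enum

class CellState(Enum):
    INACTIVE = 0
    ACTIVE_FROM_INPUT = 1
    PREDICTIVE = 2          # a single state: no "how many steps ahead" field

CONNECTED_PERM = 0.5        # permanence threshold for a valid synapse
PERM_INC = 0.1
PERM_DEC = 0.1

class Synapse:
    def __init__(self, permanence):
        self.permanence = permanence

    def connected(self):
        return self.permanence >= CONNECTED_PERM

def reinforce(syn, prediction_was_correct):
    """One-step learning rule: a correct prediction strengthens the synapse,
    an incorrect one weakens it, clamped to [0, 1]."""
    delta = PERM_INC if prediction_was_correct else -PERM_DEC
    syn.permanence = max(0.0, min(1.0, syn.permanence + delta))
```

The point being that PREDICTIVE is one state, with nothing recording which future step the prediction is *for*.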
> With one bit of information - predictive or not - it is impossible to tell
> the target step of a prediction.  Let's use your ABCDE example.  E
> eventually forms connections with C so that E becomes predictive when C is
> active.  That is a correct prediction for two steps into the future, but a
> wrong prediction for one step into the future.  So the C->E synapse will be
> weakened for making an "incorrect" prediction.
>
> Given repeated ABCDE inputs, what it sounds like will happen is that
>  - a C->E synapse will form,
>  - when the next C comes up it will predictively-activate
>  - when the next input is actually D, the C->E's permanence will decrement
> because E did not follow C
>  - after a few cycles it will disappear (permanence < threshold = invalid)
>  - the next E input will increment it back into validity
>  - and so forth, forever, winking into and out of existence
>
> This happens because a single bit of information (predictive or
> not-predictive) - is insufficient to determine how many steps into the
> future a prediction is *for*.  So a prediction about >1 step into the
> future is an incorrect prediction about 1 step into the future, but there
> is no data structure that I've read about that carries the information
> "this is a prediction for two steps from now".  Which seems like a problem
> - ?
>
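To make that winking-in-and-out cycle concrete, here is a toy trace of a single C->E permanence under a strict one-step rule (the constants and update rule are my own invention, not NuPIC internals):

```python
# Toy trace of one C->E permanence under a strict one-step learning rule.
# Constants and the update rule are my own simplification, not NuPIC's.
CONNECTED_PERM = 0.5
PERM_INC, PERM_DEC = 0.1, 0.1

def run(sequence, cycles, perm=0.5):
    history = []
    for _ in range(cycles):
        for prev, cur in zip(sequence, sequence[1:]):
            if prev == 'C':
                # E was predicted after C, but the next input is D, so under
                # a one-step interpretation the prediction is "wrong".
                perm = max(0.0, perm - PERM_DEC)
            if cur == 'E':
                # E actually arrived (two steps after C); the synapse that
                # predicted it is reinforced.
                perm = min(1.0, perm + PERM_INC)
            history.append(round(perm, 2))
    return history

# The permanence oscillates around the connection threshold rather than
# settling: one decrement (at C->D) and one increment (at E) per pass.
print(run("ABCDE", 4))
```

Each pass through ABCDE decrements once and increments once, so the synapse dips below the threshold and climbs back, forever, which is exactly the cycle in the list above.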
> 3) Temporal pooling.
>>
>> This is when cells learn to fire continuously during part or all of a
>> sequence.
>>
>
> It seems like the design for a single layer precludes that - anything
> which learns to fire continuously will quickly unlearn it, relearn it,
> unlearn it, relearn it.  I could imagine feedback from another layer that
> can recognize ABCDE as a whole might reinforce our C->E connection so it
> does not disappear.  Is that where this problem gets solved?
>
>
>>
>> That actually brings up a question I was wondering about:  When we talk
>> about making predictions several steps in advance, how is that actually
>> done - by snapshotting the state, then hypothetically feeding the system's
>> predictions into itself to get further predictions and then restoring the
>> state;  or is it simply that you end up with cells which will only be
>> predictively active if several future iterations of input are likely to
>> *all* activate them?
>>
>> When NuPIC makes predictions multiple steps in advance, we use a
>> separate classifier, external to the CLA. We store the state of the CLA in
>> a buffer for N steps and then classify those states when the correct data
>> point arrives N steps later.
>>
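For my own understanding, here's how I picture the buffering scheme described above (a hypothetical sketch; NStepClassifier and everything in it are my invention, not NuPIC's actual API):

```python
# Hypothetical sketch of the external N-step scheme described above: buffer
# CLA states, and when the true data point arrives N steps later, learn the
# association "this past state preceded this value".  Not NuPIC's real API.
from collections import deque, defaultdict

class NStepClassifier:
    def __init__(self, n_steps):
        self.n = n_steps
        self.buffer = deque(maxlen=n_steps)        # last N CLA states
        # votes[state][value] = how often `value` arrived N steps after `state`
        self.votes = defaultdict(lambda: defaultdict(int))

    def learn(self, cla_state, actual_value):
        # The state from N steps ago is now known to precede actual_value.
        if len(self.buffer) == self.n:
            past_state = self.buffer[0]
            self.votes[past_state][actual_value] += 1
        self.buffer.append(cla_state)

    def predict(self, cla_state):
        # "If this state is active, it's really a prediction N steps out."
        counts = self.votes.get(cla_state)
        if not counts:
            return None
        return max(counts, key=counts.get)
```

In other words, the association learned is between the state from N steps ago and the value that just arrived, outside the CLA's own synapses.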
>
> So, you develop the ability to say "if this cell is active, it's actually
> a prediction about N steps in the future" and just use that against the
> system not in learning mode?  Or is the classifier output fed back into the
> dendrite segment layout and permanence values of synapses somehow?
>
> Thanks,
>
> -Tim
>
>


-- 
http://timboudreau.com
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
