Hi Subutai,

thank you very much for your answer. Apparently I do not correctly understand 
when a cell is considered to be predicted and when it should be re-used once 
the sequence occurs again.

> If I understand your diagram correctly, picture (h) is the key one. In this 
> case you want a different cell in V to become active. The first time through 
> (h), when you see E that column bursts and the old V cell will become 
> predicted. The next time through though E will not burst and the cell in E 
> you show as L will get predicted. Then when V happens, it will burst because 
> it has not been predicted and a new cell in V will get chosen to be the 
> learning cell. And so on.

Something is still not clear to me; this is how I understand it:
If, the next time the sequence occurs, "the cell in E I show as L is predicted", 
it will become active at the next time step, because E actually occurs. This 
activation will cause the cell in V to be predicted, and therefore the column 
will not burst. You said it will burst because it would not be predicted. I 
thought that a cell becomes predicted as soon as enough active cells have a 
forward connection to it. With a desired local activity of 1, it would be 
enough for the cell in E to be active and predict the cell in V. Please tell me 
where I am wrong.
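To make my understanding concrete, here is a minimal Python sketch of the 
prediction rule as I read it in the white paper (all names here are 
illustrative, not the NuPIC API; the threshold of 1 stands in for my 
"desired local activity = 1" setting):

```python
# Hypothetical sketch: a cell becomes predictive when at least
# ACTIVATION_THRESHOLD of the cells on one of its distal segments
# are currently active. This is my reading, not NuPIC code.

ACTIVATION_THRESHOLD = 1

def compute_predictive_cells(active_cells, distal_segments):
    """Return the set of cells that enter the predictive state.

    distal_segments maps a cell name to a list of segments, where each
    segment is a set of presynaptic cell names."""
    predictive = set()
    for cell, segments in distal_segments.items():
        for segment in segments:
            # Count how many presynaptic cells on this segment are active.
            if len(segment & active_cells) >= ACTIVATION_THRESHOLD:
                predictive.add(cell)
                break
    return predictive

# With a threshold of 1, a single active cell in E that has a lateral
# connection onto the cell in V is enough to predict that V cell,
# so the V column would not burst.
active = {"E_cell"}
segments = {"V_cell": [{"E_cell"}]}
print(compute_predictive_cells(active, segments))  # → {'V_cell'}
```

If this rule is right, I do not see how the V column can burst in your 
scenario; if the rule is different, that would explain my confusion.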

Thank you again and all the best,
Stefan

> 
> A lot depends on how you set the parameters, but I believe this is the 
> correct intuition.
> 
> In NuPIC, you can look at the (admittedly complex) test script 
> $NUPIC/examples/tp/tp_test.py. In there we construct pretty high-order 
> sequences such as:
> 
>    A B 0 1 2 3 4 E
>    G H 0 1 2 3 4 I
> 
> In this example, in order to make the correct predictions for the last 
> element, the TP has to learn a very high order sequence. To learn this the TP 
> will likely have to see several repetitions.
> 
> —Subutai
> 
> 
> 
> On Wed, Dec 11, 2013 at 10:58 AM, Stefan Lattner <[email protected]> 
> wrote:
> Hey guys!
> 
> I am new to this list, just found out that it even exists.
> 
> Since the white paper on the CLA was published in 2011, I have been working 
> on a Java implementation of that model.
> Because there was no such implementation in NuPIC in 2011, I wrote everything 
> from scratch. Many questions came up during this process, but there was no 
> detailed description of how it works; the white paper was the only resource I had.
> 
> That is kind of sad; I would have had many questions. I am currently 
> finishing my master's thesis, in which I write about everything I found out 
> about the HTM and CLA during my experiments, because I did not expect Numenta 
> to come up with more details (which is actually the case, apart from the code 
> Numenta is providing).
> 
> A very important question for me is still this: according to Numenta, the CLA 
> should have a variable Markov order. However, the way learning is described 
> in the CLA white paper leads to an order of two only.
> 
> I made an illustration for my thesis of how learning takes place; this is how 
> I understand it.
> (At first, the sequence E-V-E-N is learnt (desired local activity = 1). Then 
> the HTM is reset and E-V-E-N is provided to the HTM again. Then the HTM is 
> not reset while E-V-E-N is provided again.)
> Can somebody tell me how the order of the learnt chains can be increased, or 
> what I am understanding wrong?
> 
> Thank you and all the best,
> Stefan
> 
> <temporal_pooler.png>
> 
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
> 
> 

