I'm not part of the official team, but I'll tell you what I understand.
Biologically speaking, there's no way of discerning when a sequence ends,
since we live in continuous time. The reset() method just clears the
predictive states in all the cells, so they won't affect which cells
become active with the next input. This should make learning faster, but I
think it shouldn't be necessary in the long term.
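To make the idea concrete, here's a minimal toy sketch (my own illustration, not NuPIC's actual TemporalMemory implementation) of how clearing predictive state changes what happens on the next input. A column "bursts" when it becomes active without having been predicted; reset() wipes the predictions, so the first element of the next sequence bursts instead of matching stale predictions left over from the previous one:

```python
class ToyTM:
    """Toy temporal-memory model tracking only per-column predictive state."""

    def __init__(self):
        self.predicted = set()  # columns predicted to become active next

    def compute(self, active_columns, next_prediction):
        # A column bursts when it activates without having been predicted.
        bursting = set(active_columns) - self.predicted
        # Store this step's predictions for the next input.
        self.predicted = set(next_prediction)
        return bursting

    def reset(self):
        # Clear all predictive state between sequences, so predictions from
        # the end of one sequence can't carry over into the next.
        self.predicted = set()


tm = ToyTM()
tm.compute({1}, {2})   # 'A' arrives cold: column 1 bursts, predicts 'B'
tm.compute({2}, {3})   # 'B' was predicted: no bursting
tm.reset()             # sequence boundary: predictions cleared
tm.compute({1}, {2})   # 'A' bursts again, instead of matching stale state
```

Without the reset() call, whatever the last input predicted would still influence (rightly or wrongly) how the first input of the next sequence is interpreted.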

Cheers

On Mon, Dec 28, 2015 at 1:01 PM, Samuel O Heiserman <[email protected]
> wrote:

> Hi Nupic!
>
>    I'm a grad student working on creating my own implementation of a
> region of temporal memory, and I'm trying to get a really firm
> understanding of how the algorithm knows when a sequence has ended. As it
> stands now I have a 'sequence length' parameter, which is of course very
> limiting because I want it to find sequence patterns of whatever length are
> relevant for the given data set, and do so in the face of noise at all
> scales. If I remove this parameter though I would just have one giant
> sequence the length of the data set. I've looked at the code on github,
> though I'm still new to programming and have only been able to discern (I
> think) that it has to do with how many columns are bursting (like how
> surprised the memory is by a given input).
>     For example if I had the simple sequence
> 'A,B,C,A,B,C,D,A,B,C,A,B,C,D', I'd want it to extract both recurring
> sequences 'A,B,C' and 'A,B,C,D', though my current system of course doesn't
> do this. Any insights are greatly appreciated!
>
> Thanks,
>
> -- Sam
>
