Hi Mark,

We haven't had too much discussion about the TP on this list, but you ask some interesting questions below. We don't really know a huge amount, but we do know it can learn extremely long sequences. Consider that each transition in a sequence is represented by a number of segments. Each step would, at a minimum, consume activationThreshold segments, since you need that many active columns to go on to the next step. So one limit with our typical configuration is (128 segments per cell * 32 cells per column * 2048 columns) / activationThreshold. That's a sequence about half a million steps long! We could theoretically "hand construct" a sequence that long and it should work.
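To make that back-of-the-envelope limit concrete, here is a tiny sketch of the arithmetic. The parameter names mirror typical NuPIC TP settings, and the activationThreshold value of 16 is an assumption chosen to reproduce the "about half a million" figure, not a claim about the actual default:

```python
# Rough TP sequence-capacity estimate from the configuration above.
max_segments_per_cell = 128
cells_per_column = 32
column_count = 2048
activation_threshold = 16  # assumed value for illustration; defaults vary

total_segments = max_segments_per_cell * cells_per_column * column_count
max_steps = total_segments // activation_threshold

print(total_segments)  # 8388608
print(max_steps)       # 524288 -- roughly half a million steps
```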
In practice the length is likely to be a lot lower, but it's still probably pretty long (it would be interesting to try this out with random SDRs). The length is not really the problem; the difficulty of the sequences (like the one you have below) is more interesting. We already have some tests in NuPIC of low-order vs. high-order sequences. Please take a look at this file:

nupic/tests/integration/py2/nupic/algorithms/tp_test.py

It would be really cool to expand on this type of test. There's a lot more we could do to understand the TP better!

—Subutai

On Sun, Nov 17, 2013 at 3:00 PM, Marek Otahal <[email protected]> wrote:
> I'm about to create and carry out some benchmarks of the CLA.
>
> - For TP: given n sequences, what's the max length of the sequences it can recall?
> - Test with the hardest sequences? (AAAAAAAAAAAAAAAAAAAAAAAAAAAAB)
> - Resistance to noise (I think Subutai did these? Could we have the graphs and scripts, please?)
>
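On the "try this out with random SDRs" idea, here is a minimal sketch of generating a long sequence of random SDRs that could be fed to the TP. The function names, the 2048-column width, and the 40-bit (~2%) sparsity are illustrative assumptions, not NuPIC API:

```python
import random

def random_sdr(num_columns=2048, num_active=40, rng=random):
    """Return one random SDR as a sorted tuple of active column indices.

    2048 columns with 40 active (~2% sparsity) roughly matches the typical
    configuration discussed above; both numbers are assumptions.
    """
    return tuple(sorted(rng.sample(range(num_columns), num_active)))

def random_sequence(length, seed=42):
    """Generate `length` random SDRs with a fixed seed for repeatability."""
    rng = random.Random(seed)
    return [random_sdr(rng=rng) for _ in range(length)]

seq = random_sequence(1000)
print(len(seq), len(seq[0]))  # 1000 40
```

With a fixed seed the same sequence can be replayed for a second training pass, so recall can be measured by comparing predicted vs. actual SDRs step by step.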
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
