I'm about to create and carry out some benchmarks of the CLA.

I'd be happy to hear suggestions on what to benchmark (keep it reasonable
for starters, I have nothing yet ;) ), and even more so for help coding and
running the experiments and interpreting the results.

I'm in a hurry, so I'll see what I can do. With that in mind, I'd like to
ask: can Numenta or any of you share scripts or reports from such benchmarks?

Things I'd like to (eventually) do:

1/ speed and memory requirements of the encoders/SP/TP
(with respect to #columns). I've already measured these, so I'll post the results.
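For reference, this is the kind of minimal timing harness I have in mind. It's only a sketch: `benchmark_compute` is a hypothetical helper, and the lambda is a stand-in for whatever encoder/SP/TP step is being measured, not the actual NuPIC API.

```python
import time

def benchmark_compute(compute, inputs):
    """Average wall-clock seconds per call of `compute` over `inputs`.

    `compute` is a placeholder for one encoder/SP/TP step; swap in the
    real call to measure it. Memory can be tracked separately, e.g. with
    tracemalloc or by watching the process RSS.
    """
    start = time.perf_counter()
    for x in inputs:
        compute(x)
    return (time.perf_counter() - start) / len(inputs)

# Example: time a dummy step over 10k inputs.
avg_s = benchmark_compute(lambda x: x * 2, range(10000))
```

Running this once per column count (512, 1024, 2048, ...) would give the speed-vs-#columns curves I'm after.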

2/ (most interesting) information capacity of the SP, TP, and plasticity:
-given a spatial pooler with a fixed #cols (what's a reasonable minimum?
512?), see how many patterns it can distinguish in a given number of
learning rounds.
-how fast does it adapt when I change the dataset?
-if I'm right, the theoretical capacity is insane: n!/((n-k)!k!) for
1000 cols at 2% sparsity. What is the practical limit?
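To put a number on "insane": with n = 1000 columns and 2% sparsity, k = 20 active columns, and the binomial n!/((n-k)!k!) is a 42-digit number, on the order of 10^41. A quick sanity check in plain Python:

```python
from math import comb

n = 1000  # columns
k = 20    # 2% sparsity -> 20 active columns

# comb(n, k) computes n! / ((n-k)! k!), the number of distinct SDRs.
capacity = comb(n, k)

print(len(str(capacity)))  # 42 digits, i.e. ~10^41 possible patterns
```

The practical limit will of course be vastly lower, which is exactly what the benchmark should measure.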

-for the TP: given n sequences, what's the max length of the sequences it
can recall?
-test with the hardest sequences? (AAAAAAAAAAAAAAAAAAAAAAAAAAAAB)
-resistance to noise (I think Subutai did these? Could we have the graphs
and scripts, please?)
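A sketch of how I'd generate the test data for those two TP cases (the helper names are mine, nothing from NuPIC):

```python
import random

def hardest_sequence(symbol, repeats, terminal):
    """Worst case for sequence memory, e.g. hardest_sequence('A', 28, 'B')
    -> 'AAA...AB': the TP must effectively count the repeats to predict
    exactly when the terminal symbol arrives."""
    return symbol * repeats + terminal

def add_noise(seq, flip_prob, alphabet, rng=None):
    """Replace each element with a random symbol with probability
    flip_prob, for the noise-resistance tests. Seeded by default so
    runs are reproducible."""
    rng = rng or random.Random(0)
    return ''.join(rng.choice(alphabet) if rng.random() < flip_prob else s
                   for s in seq)
```

Sweeping `repeats` (for max recallable length) and `flip_prob` (for noise resistance) would give the two curves.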

3/ some classical sequence-mining and pattern-matching datasets

4/ how do the patterns stabilize with a hierarchy? (if I can get the code running)

TY.

-- 
Marek Otahal :o)
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
