As I understand it, instead of computing new values for every column/cell
(as the brain does, since it is fully local and parallel), we would compute
only those cells reached by input bits that are ON (this requires reverse
links from each input bit to the cells it connects to). Since only about 2%
of bits are ON in an SDR, this gives the 50x.
..or I'm completely wrong :)
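
To make the idea concrete, here is a minimal sketch (not the actual NuPIC TP code; the array names and sizes are my own assumptions) showing that accumulating over only the ON input bits gives the same overlap scores as iterating over every cell, with ~50x fewer outer iterations at 2% sparsity:

```python
# Hypothetical sketch -- not NuPIC's implementation. Names such as
# `connections` and the dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_cells = 2048, 4096
# Binary connection matrix: connections[i, c] == 1 if input bit i
# synapses onto cell c. This is the "input bit -> cells" link table.
connections = (rng.random((n_inputs, n_cells)) < 0.05).astype(np.int8)

# A 2%-sparse SDR: only ~2% of the input bits are ON.
active_bits = rng.choice(n_inputs, size=n_inputs // 50, replace=False)
sdr = np.zeros(n_inputs, dtype=np.int8)
sdr[active_bits] = 1

# Method 1: iterate over every cell (the white-paper formulation).
overlap_dense = np.array([int(sdr @ connections[:, c])
                          for c in range(n_cells)])

# Method 2: iterate only over the ON input bits, accumulating into
# the cells each bit links to. Same result, far fewer iterations.
overlap_sparse = np.zeros(n_cells, dtype=int)
for i in active_bits:
    overlap_sparse += connections[i]

assert np.array_equal(overlap_dense, overlap_sparse)
```

The outer loop shrinks from `n_cells` (or `n_inputs`) iterations to `len(active_bits)`, which at 2% sparsity is the claimed ~50x reduction.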

Cheers, Mark


On Thu, Sep 19, 2013 at 4:13 PM, Oreste Villa <[email protected]> wrote:

> Hi all,
>
> I am trying to make sense to the following comment that I got a couple of
> weeks ago from Subutai regarding 50x speedup on the new algorithm for the
> TP.
>
> This is the comment:
>
> "TP is inherently slower and it was more challenging to optimize it. In
> the TP now rather than iterating over all the cells we iterate over all the
> ON input bits. The end result is identical but since we have about 2%
> sparsity in input bits, the latter method is about 50x faster. After
> optimizations, the SP and TP are now at roughly the same level in timing
> profiles. When the TP is more "full" of segments it becomes slower than SP
> again."
>
> Is there anybody available to explain how exactly this fits into the
> white-paper pseudocode for the TP?
>
> In return, if I understand it well enough and the topic comes up,
> I volunteer to help write the "white paper 2.0" :-)
>
> Thanks,
>
> Oreste
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>


-- 
Marek Otahal :o)