Hi,

that sounds interesting! Could you tell us which platform you want to
target?

I recently started some work on reducing the footprint; you can see the
initial code/ideas at https://github.com/numenta/nupic/pull/298

The theoretical memory footprint for the TP is:
#columns x #cellsPerColumn x #segmentsPerCell x #maxSynapsesPerSegment x
sizeof(InSynapse) (a class holding roughly a char, a float, and an int32),
so e.g. 2048 x 4 x 32 x 32 x (1B + 4B + 4B).
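To make the arithmetic concrete, here is a quick back-of-envelope calculation of the formula above. The parameter names are just illustrative; the 9-byte InSynapse size ignores any alignment padding the compiler may add, so treat the result as a lower bound:

```python
# Theoretical TP footprint: columns x cells x segments x synapses x sizeof(InSynapse)
columns = 2048
cells_per_column = 4
segments_per_cell = 32
max_synapses_per_segment = 32
bytes_per_synapse = 1 + 4 + 4  # char + float + int32, no padding assumed

total_bytes = (columns * cells_per_column * segments_per_cell
               * max_synapses_per_segment * bytes_per_synapse)
print(total_bytes / 2**20)  # 72.0 MiB for this example network
```

So even this modest 2048-column network needs on the order of 72 MiB just for synapses, which is why the exact per-synapse size matters on constrained hardware.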

For more detailed math, see the data structures used by the core classes
in nta/algorithms/.

From a practical point of view, I think you'd be better off creating a
model and measuring the memory consumption empirically, so you can judge
whether it's feasible for your platform. I intend to run such tests for
huge networks, but haven't gotten to that yet.
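For the empirical route, a minimal sketch of how you might measure peak memory around model creation, using only the standard library. This assumes a POSIX system; note that `ru_maxrss` is reported in kilobytes on Linux but in bytes on macOS, and the model-building step here is just a stand-in allocation:

```python
import resource

def peak_rss_bytes():
    # Peak resident set size of this process so far.
    # ru_maxrss is kilobytes on Linux (bytes on macOS) -- adjust as needed.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024

before = peak_rss_bytes()
model = [0.0] * (10**6)  # stand-in for building your CLA model
after = peak_rss_bytes()
print("approx. model footprint:", after - before, "bytes")
```

Measuring before and after model construction gives a rough upper bound that includes allocator and interpreter overhead, which is arguably what matters for judging feasibility on a target platform.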

Cheers, Mark



On Tue, Oct 22, 2013 at 11:20 AM, Laurent Julliard <[email protected]> wrote:

> Hi,
>
> I'm thinking of porting the CLA algorithms on a specific hardware platform
> but before doing so I'd like to have a rough idea of the memory footprint
> needed by the main objects and data structures used in the C++ code. Has
> anybody done this exercise before ?
>
> Regards,
> Laurent Julliard
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>



-- 
Marek Otahal :o)
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
