There are already a couple of teams working on HW implementations of the CLA. IBM Research and Seagate are working on pure CLA HW. There is a scientist at Sandia National Labs who is also interested. And there is a program director at DARPA putting together a HW initiative for temporal learning algorithms within hierarchies, which was inspired by the CLA. These initiatives are very early, but the principal scientists involved are serious about this. At some point it might make sense to have an entire section for people working on HW implementations. If anyone wants to be introduced to these scientists, let me know and I will ask about their interest.
Jeff

From: nupic [mailto:[email protected]] On Behalf Of Fergal Byrne
Sent: Tuesday, July 16, 2013 2:29 PM
To: NuPIC general mailing list
Subject: Re: [nupic-dev] Newbie question

Hi Subutai,

Brilliant to hear the historical background; don't "misunderestimate" (as GW Bush said) its importance for understanding how the CLA works. One of Jeff's big motivations for the work you guys have been doing is that we (as humans) are able to do these amazing things with a very limited number of really slow neurons, with no ability to "upgrade" or, for example, add hard drives for extra storage. We must be using a very clever, but very easily implemented, algorithm for this performance, one which is robust in every dimension: energy, material, and time. So a CLA (a 1 mm square, 1-layer, one-region slice) should be implementable in any kind of computational environment, in any language, to within an order of magnitude of performance. Developer and experimenter time is the limiting factor in the work right now. Once we prove this is how brains work, someone will build it in hardware (and probably is already working on it), or figure out how to get your graphics card to process it.

Regards,
Fergal Byrne
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
