All,

This is my first contribution to the mailing list, but I have been reading and following Numenta for a while. I attended the first couple of HTM workshops way back, out of curiosity. My background is in mechanical engineering and robotics, and I know very little about biological or computational neuroscience aside from what I have picked up from following your progress (which I regard as the most interesting subject on earth).
Naturally, the prospect of motor control with the CLA makes me very excited. In Jeff's talk on motor control at the hackathon, he mentioned that it should be possible to build robots and control them in a virtual environment in order to test new implementations. That reminded me of a paper published in 2009 that, to my mind, did a really nice job of creating an environment for very simple motor control and then more or less 'swarming' over different parameters; it produced very thought-provoking results. You've probably seen it already, as it was widely blogged about at the time:

On a blog: http://blogs.discovermagazine.com/notrocketscience/2009/08/17/robots-evolve-to-deceive-one-another/#.UngA-ZSxOeF
Full text: http://www.pnas.org/content/106/37/15786.full.pdf+html?with-ds=yes

I'm pointing it out now because it seems like it could be a useful template. One could replace the paper's simpler neural network with a CLA and replicate the results, or even add a CLA on top of the existing behavior-generating 'old brain', etc. I have been trying to understand Gazebo and ROS well enough to try this by the time CLAs have motor feedback, but I'm not quite there yet.

In the near future, when motor control via the CLA is more functional, I think it would be neat to use it to control elegant and inspirational toys, a la Sony's Aibo but on a more hacked-together-and-funded-via-Kickstarter level.

Best,
Dave Petrillo

On Tue, Oct 29, 2013 at 9:42 AM, Fergal Byrne <[email protected]> wrote:

> Cheers Aseem,
>
> The column activation in the CLA is a simplification, which reflects the
> fact that the feed-forward axons pass up through the column, so all the
> cells are getting similar inputs.
>
> One advantage of maintaining individual feed-forward dendrites for each
> cell is that the cells will form better connections to inputs which they
> had predicted, thus improving the detection of all the sequences the CLA
> is learning.
> The other advantage is that reconstruction becomes less ambiguous. Each
> cell is detecting "these inputs during this sequence", so the
> reconstruction is more precise.
>
> I've identified the biological basis for this extension. Before the
> inputs are considered, the previous pattern of activity feeds into the
> cells via distal dendrites, which causes many cells to depolarise (raise
> their potential towards the activation threshold). When you add the
> feed-forward inputs, the first cell to fire is the one with the highest
> predictive + feed-forward potential. This cell is chosen to activate and
> may vertically inhibit the others.
>
> This suggests that predictive potential could be added to the
> feed-forward overlap in the SP to improve the identification of the
> right SDR.
>
> Regards,
>
> Fergal Byrne
>
> On Tue, Oct 29, 2013 at 11:17 AM, Aseem Hegshetye
> <[email protected]> wrote:
>
>> Hi,
>>
>> I was thinking of a reconstruction method because look-up tables looked
>> too artificial to me, but I do get that reconstruction from the network
>> itself is very important for the motor implementation. I don't know if
>> it would be possible to convert outputs from look-up tables back into an
>> input to the motor system with much efficiency. That was great, Fergal!
>>
>> Probably the current method of converting raw input data to a 121-bit
>> input pattern is not the best way we can provide data to the CLA.
>> If inputs could also be diffusely represented before putting them into
>> the CLA, like sensory inputs from the medial lemniscus pathway, that
>> would be even more awesome, I guess. I did not understand much of what
>> Fergal said about adjusting those buckets, but I am looking forward to
>> the talk.
>>
>> There is definitely a lot of load being put on the CLA due to the lack
>> of preprocessing. I have seen some projects where raw images were fed
>> directly to a CLA. In humans we have the retina, then the LGN, then the
>> cortex.
>> As the CLA is a layer of cortex, it's hard to imagine connecting the
>> optic nerve to the occipital cortex directly. The four layers of the
>> retina simplify an image to such an extent that it becomes very easy for
>> the cortex to extract invariant patterns out of it. The LGN also helps
>> arrange the topography. If the raw input got preprocessed (I don't know
>> how), that would be great.
>>
>> If a column is representing something and the active cells in that
>> column vary with respect to past events, then PROBABLY every cell in a
>> column will have the same connection permanence with the input bits.
>> This needs to be verified.
>>
>> The review paper provided by Jeff Hawkins was good. Now I am getting a
>> surface-level idea of how motor reinforcement will help make better
>> predictions. Going through these discussions, suddenly the brain appears
>> so simple and comprehensible.
>> Something great is on its way!
>>
>> Aseem Hegshetye
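Fergal's suggestion, adding each cell's predictive (distal) depolarisation to its feed-forward overlap before picking winners, can be sketched in a few lines of Python. This is a toy illustration only, not NuPIC's actual SP code: the column count, the synapse sampling, the `boost` weight, and the `select_active_columns` helper are all invented for the example, and the 121-bit input size is just the figure mentioned in the thread.

```python
import random

random.seed(0)

N_COLUMNS = 64
N_INPUTS = 121   # matches the 121-bit input pattern mentioned in the thread

# Toy proximal "connected synapses": each column samples some input bits.
connected = [set(random.sample(range(N_INPUTS), 24)) for _ in range(N_COLUMNS)]

def select_active_columns(active_bits, predictive_potential,
                          n_active=8, boost=0.5):
    """Score each column by feed-forward overlap plus a weighted
    predictive bonus, then pick the top-k as winners."""
    scores = []
    for col in range(N_COLUMNS):
        overlap = len(connected[col] & active_bits)   # feed-forward overlap
        scores.append(overlap + boost * predictive_potential[col])
    ranked = sorted(range(N_COLUMNS), key=lambda c: scores[c], reverse=True)
    return ranked[:n_active]

# A sparse input pattern and a stand-in for the distal depolarisation
# carried over from the previous timestep.
active_bits = set(random.sample(range(N_INPUTS), 20))
predicted = [random.random() for _ in range(N_COLUMNS)]

winners = select_active_columns(active_bits, predicted)
```

With `boost = 0`, this reduces to plain feed-forward overlap scoring; raising it lets previously predicted columns win ties, which is the behaviour Fergal is proposing.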
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
