Hi Aseem,

I'd +1, +2 and +3 what Kevin light-heartedly says, with some comments.
What you learn in a good ML class is all extremely useful, even if (or perhaps because) it has very little to do with what we're up to. Most of the ML strategies you learn to use are actually *superior* to the CLA in certain contexts, because they are designed to solve certain problems. The same applies to general computing: we're not really good at doing numerical computations, so a calculator or spreadsheet will always outperform a brain by several orders of magnitude. Perceptrons and other NN strategies can also perform certain tasks effectively, when the problem domain provides the right circumstances.

The difference with NuPIC is that we are not doing *machine learning* at all! As Jeff always introduces it: we study the principles which underlie the neocortex, and then build intelligent systems which use those principles. The fact that the systems learn is only because we're trying to simulate how the brain learns. So it's *simulated intelligence* rather than machine learning - the machine learning is only the hoped-for outcome of the work, and the evidence that we're simulating the neocortex.

Another, more philosophical difference is that we're acting more like natural selection than intelligent design. Your ML class is all about building programs which solve problems on complicated data. You can directly manipulate the parameters and sequence of execution of the bits of the program - you're building a learning machine. What we're doing is much more like what happens in nature - generate a structure, run it in real life, and evolve that structure based on what improves it. We make changes at the genetic level (e.g. change an encoder setting and re-run the data), rather than at the organism level (e.g. change the synaptic connections).
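To make the "genetic level" idea concrete, here's a minimal sketch of that loop: mutate one setting, re-run the whole model on the data, and keep the change only if it lowers the error metric. Everything here is hypothetical - `run_model` is a stand-in for actually building and scoring a NuPIC model, and the toy error surface is invented purely for illustration.

```python
import random

def run_model(encoder_resolution, data):
    """Stand-in for building a model with one encoder setting and
    returning its prediction error on the data (lower is better).
    Hypothetical: a real run would train and score a CLA model here;
    this toy error surface just has a minimum near resolution = 0.5."""
    noise = random.uniform(0.0, 0.05)
    return abs(encoder_resolution - 0.5) + noise

def evolve(data, candidates, generations=20):
    """Genetic-level search: start from the best candidate setting,
    then repeatedly mutate it, re-run, and keep only improvements."""
    best = min(candidates, key=lambda r: run_model(r, data))
    best_err = run_model(best, data)
    for _ in range(generations):
        mutant = best + random.uniform(-0.1, 0.1)  # small "genetic" change
        err = run_model(mutant, data)
        if err < best_err:                         # selection step
            best, best_err = mutant, err
    return best, best_err
```

Note what the loop never touches: the internals of the trained model (the "organism"). All manipulation happens at the level of the settings that generate it, which is the distinction drawn above.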
Going back to the importance of ML: the proof that we're on the right track will use evidence based on the metrics used in ML (along with others), and the measures we use in setting up and comparing CLAs are all ML-derived (see the video on swarming, which explains the parameter optimisation based on minimising error metrics).

On a hugely more general level, don't repeat the mistakes we all made (I'm talking about myself too!) and waste good teaching just because you don't value it at the time!

Regards,
Fergal Byrne

On Sun, Sep 15, 2013 at 6:03 PM, Archie, Kevin <[email protected]> wrote:

> Aseem,
>
> A bit of friendly unsolicited advice: if you're taking the class, learn what they're teaching, even if it seems obsolete or irrelevant. This is a good general rule, but especially for this case: I think the CLA's a really interesting model, but it is after all another in a long line of brain-inspired learning algorithms. I suspect it gets some things right that others have missed, and I'm excited to see where it will lead over the next 10-20 years, but it is after all a stripped-down model built on some of what 5, maybe 10% of the cells in neocortical gray matter are doing--the easy ones to monitor. It's almost certainly not the whole story of building an intelligent system.
>
> Even in a CLA system, as things start getting complicated, I suspect there will be moments of "oh, that part's just a high-D-distribution-with-sparse-covariances/regression problem/linear classifier, I can throw a PGM/Gaussian process/SVM at it to save a bunch of cycles." Sometimes it will be useful to have some solid ML and stats history and theory because you need to know what a part of a CLA-based system is and is not. Deploy your cynicism late if at all.
>
> - Kevin
>
> p.s.
> of course this advice is more for past-undergrad-me, and to some extent to today-still-making-the-same-mistakes-me, than for you, not that I'm sure I would have followed it if I could have heard it way back when. bonus advice for long-ago-me: pay more attention during statistical mechanics. you'll want that later.
>
> p.p.s. sorry to all for the slight digression from usual topics on the list. I'll behave.
>
> On Sep 12, 2013, at 6:10 PM, Aseem Hegshetye wrote:
>
>> Hi,
>> I have grown up reading ON INTELLIGENCE and neuroscience and jeff hawkins has shown how artificial perceptrons are incapable of achieving what our brain does. Its weird to sit in a machine learning class which always starts with gradient descents and then some classifying algorithms. And everyones busy taking down notes to score good grades.
>>
>> Aseem Hegshetye
>>
>> _______________________________________________
>> nupic mailing list
>> [email protected]
>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org

--
Fergal Byrne
ExamSupport/StudyHub
[email protected]
http://www.examsupport.ie

Dublin in Bits
[email protected]
http://www.inbits.com
+353 83 4214179

Formerly of Adnet
[email protected]
http://www.adnet.ie
