Mathieu Blondel <math...@mblondel.org> wrote:
> This is also what RidgeClassifier does, only in a smarter way (Cholesky
> decomposition is done only once regardless of the number of classes).
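(For anyone following along, here is a rough sketch of the one-factorization trick: with a squared loss the matrix being factorized depends only on X, so a single Cholesky factorization of X^T X + alpha*I can be reused for every class column. This is only an illustration in NumPy/SciPy, not RidgeClassifier's actual code path, and the function names are made up.)

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ridge_multiclass_fit(X, Y, alpha=1.0):
    # X: (n_samples, n_features), Y: (n_samples, n_classes) with +/-1 targets.
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)  # depends on X only, not on the classes
    c_and_lower = cho_factor(A)               # factorize once
    W = cho_solve(c_and_lower, X.T @ Y)       # back-substitute for all classes at once
    return W                                  # (n_features, n_classes)

def ridge_multiclass_predict(X, W):
    return np.argmax(X @ W, axis=1)           # pick the class with the largest score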
ADALINE used a gradient descent learning rule. The idea was to turn the knobs to a random starting point, work out the update with pen and paper, adjust the knobs, update again, adjust again, and so on. Then you could collect the trained patterns in a ring binder and use the same "adaline" box for multiple pattern recognition tasks. It was used, for example, on submarines to process sonar data. Not really relevant today though, except in a museum. :-)

Sturla
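P.S. For the curious, the learning rule in question is the Widrow-Hoff (LMS) rule. A rough NumPy sketch of the "adjust the knobs" step, with names and the learning rate chosen only for illustration:

import numpy as np

def lms_update(w, x, y, eta=0.01):
    # One Widrow-Hoff step: move the weights down the gradient of the squared error.
    error = y - w @ x          # target minus the linear output
    return w + eta * error * x

# Tiny usage example on synthetic +/-1 data:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]))
w = rng.normal(size=3)                     # "set the knobs randomly"
for _ in range(20):                        # sweep over the patterns a few times
    for xi, yi in zip(X, y):
        w = lms_update(w, xi, yi)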