Hi

I was wondering if there is any interest in implementing mini-batch/batch
updates for the SGD algorithm. As I understand it, this is not currently
implemented:

"There is a compromise between the two forms, which is often called
"mini-batches", where the true gradient is approximated by a sum over a
small number of training examples."

http://en.wikipedia.org/wiki/Stochastic_gradient_descent
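
For concreteness, here is a minimal NumPy sketch of that averaged-gradient
step for a linear model with squared loss (the helper name
minibatch_sgd_step is just for illustration, not anything that exists in
scikit-learn):

import numpy as np

def minibatch_sgd_step(w, X_batch, y_batch, lr=0.01):
    # One mini-batch SGD step for a linear model with squared loss.
    # The gradient is averaged over the batch, so the weights are
    # updated once per batch instead of once per sample.
    residual = np.dot(X_batch, w) - y_batch            # shape (batch_size,)
    grad = np.dot(X_batch.T, residual) / len(y_batch)  # averaged gradient
    return w - lr * grad

# usage:
w = np.zeros(20)
X_batch = np.random.rand(32, 20)
y_batch = np.random.rand(32)
w = minibatch_sgd_step(w, X_batch, y_batch)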


This would mean doing a partial_fit on a small number of training examples,
but updating the weights only once per mini-batch (i.e. after a full pass
over that batch) rather than after each individual training sample.


As far as I can see, it would only require a flag in the sgd_fast.pyx code.

Thanks,

Sean