Hello all, *TL;DR*: I'd like to implement Catalyst-SVRG <https://arxiv.org/pdf/1712.05654.pdf>, an accelerated optimization algorithm, for sklearn (or for scikit-learn-contrib/lightning, if that is more appropriate). Any feedback?
*Long version*: I've been playing around with Catalyst-SVRG <https://arxiv.org/pdf/1712.05654.pdf>, an accelerated stochastic variance-reduced optimization algorithm, for my research. Both in my own experiments and in the experiments section of the linked paper, this algorithm leads to faster optimization than vanilla (unaccelerated) SVRG <https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf>, which itself is much faster than SGD and roughly on par with SAG/SAGA. Moreover, the per-iteration computational cost of Catalyst-SVRG practically matches that of plain SVRG. I was wondering whether it would benefit the community if I implemented this algorithm in sklearn/linear_model, or perhaps in scikit-learn-contrib/lightning. I would love to hear your thoughts on this. Cheers, Krishna
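P.S. For concreteness, here is a minimal Python sketch of the scheme I have in mind: an outer Catalyst loop that repeatedly calls SVRG on the proximally regularized objective f(w) + (kappa/2)||w - y||^2 and then applies a Nesterov-style extrapolation. All names and parameters here (grad_i, kappa, q, step, ...) are illustrative choices of mine, not a proposed API, and the sketch omits the warm-start and inner-loop stopping rules that are a key part of the paper.

import numpy as np

def svrg(grad_i, n, w0, prox_center=None, kappa=0.0,
         step=0.1, epochs=2, rng=None):
    """Run SVRG on f(w) + (kappa/2)||w - prox_center||^2.

    grad_i(w, i) returns the gradient of the i-th loss term at w.
    The quadratic term is Catalyst's proximal regularizer.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = w0.astype(float).copy()
    for _ in range(epochs):
        snapshot = w.copy()
        # Full gradient at the snapshot: the variance-reduction anchor.
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient of the smoothed objective.
            g = grad_i(w, i) - grad_i(snapshot, i) + full_grad
            if prox_center is not None:
                g = g + kappa * (w - prox_center)
            w -= step * g
    return w

def catalyst_svrg(grad_i, n, w0, kappa=1.0, q=0.0,
                  step=0.1, outer_iters=20, rng=None):
    """Sketch of Catalyst acceleration wrapped around SVRG.

    Each outer step approximately minimizes f(w) + (kappa/2)||w - y||^2
    with SVRG, then extrapolates (cf. Lin, Mairal & Harchaoui,
    arXiv:1712.05654). Here q stands for mu/(mu + kappa), where mu is
    the strong convexity constant (q = 0 in the non-strongly-convex case).
    """
    x = w0.astype(float).copy()
    y = x.copy()
    alpha = np.sqrt(q) if q > 0 else 1.0
    for _ in range(outer_iters):
        # Simple warm start from the previous iterate; the paper studies
        # better warm-start and stopping criteria for this inner call.
        x_new = svrg(grad_i, n, x, prox_center=y, kappa=kappa,
                     step=step, rng=rng)
        # Solve alpha_{k+1}^2 = (1 - alpha_{k+1}) alpha_k^2 + q alpha_{k+1}.
        a2 = alpha ** 2
        alpha_new = 0.5 * (q - a2 + np.sqrt((q - a2) ** 2 + 4 * a2))
        beta = alpha * (1 - alpha) / (a2 + alpha_new)
        y = x_new + beta * (x_new - x)
        x, alpha = x_new, alpha_new
    return x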