2011/11/11 Gael Varoquaux <[email protected]>:
> On Fri, Nov 11, 2011 at 04:11:46PM +0100, Andreas Müller wrote:
>> > If you find that it does work/is useful on real problem, yes!
>> I just started working on it. Atm I can get 3% error on MNIST using
>> sklearn's SGD.
>
> Does sound good.

Doesn't a grid-searched Gaussian SVM yield 2% or 1.5% test error on
MNIST? Still, if the CPU efficiency is much better, the method is
worth having even if it's less accurate than a kernel SVM.

> I find that one of the values of the scikit, and in particular its
> mailing list, is the empirical knowledge that comes from coding and trying
> many methods. I am definitely excited about the random features methods,
> as well as the Chi2 one from your colleagues.

+1
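
For reference, a minimal numpy sketch of the random features idea being
discussed (random Fourier features in the style of Rahimi & Recht): draw
random projections whose inner products approximate an RBF kernel, then
feed the mapped data to a linear model such as sklearn's SGD. The function
name and parameters below are illustrative, not sklearn API:

```python
import numpy as np

def random_fourier_features(X, n_components=100, gamma=1.0, seed=None):
    """Approximate the RBF kernel feature map (Rahimi & Recht, 2007).

    exp(-gamma * ||x - y||^2) ~= z(x) . z(y), where
    z(x) = sqrt(2 / D) * cos(W x + b),
    W ~ Normal(0, 2 * gamma), b ~ Uniform(0, 2 * pi).
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Random projection directions sampled from the kernel's Fourier transform.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_features, n_components))
    # Random phase offsets.
    b = rng.uniform(0.0, 2 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

# Sanity check: inner products of mapped points approximate the exact kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Z = random_fourier_features(X, n_components=5000, gamma=0.5, seed=0)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
err = np.abs(K_approx - K_exact).max()
```

With enough components the approximation error shrinks (variance is
O(1/D)), so a linear SGD classifier on z(X) behaves like an approximate
kernel machine at a fraction of the training cost.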

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general