On 11/04/2011 09:26 PM, David Warde-Farley wrote:
> On Fri, Nov 04, 2011 at 07:22:15PM +0100, Andreas Müller wrote:
>>>> Of course there are many other possibilities like pretraining,
>>>> deeper networks, different learning rate schedules, etc.
>>>> You are right, this is somewhat of an active research field,
>>>> though I have not seen conclusive evidence that any
>>>> of these methods are consistently better than a vanilla MLP.
>>> http://www.dumitru.ca/files/publications/icml_07.pdf the table on page 7
>>> makes a pretty compelling case, I'd say.
>>>
>> These numbers are weird.
>> A basic grid search with an RBF SVM gives 1.4% error on MNIST.
> This was on only 10,000 examples from MNIST (1000 digits per class).
> Back in 2007, SVM solvers weren't very fast, so they scaled back the problem
> a bit.
>
Oh, sorry. Just skimmed the paper as you might have guessed ;)
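For anyone following along, the "basic grid search with an RBF SVM" mentioned above can be sketched roughly as below. This is a minimal sketch using the modern scikit-learn API; it uses the small built-in 8x8 digits dataset as a stand-in for MNIST (to stay self-contained), and the hyperparameter grid is an illustrative assumption, not the exact one used to get the 1.4% figure:

```python
# Sketch: grid search over RBF-SVM hyperparameters (C, gamma).
# load_digits is a small stand-in for MNIST; the 1.4% error quoted
# in the thread was on MNIST proper, not this dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Illustrative grid; in practice one would search a wider log-scale range.
param_grid = {"C": [1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy: %.3f" % search.score(X_test, y_test))
```

On MNIST itself one would substitute the full 70k-sample dataset (or the 10k-sample subset the paper used) and expect the search to take considerably longer, which is exactly the scaling issue noted in the reply above.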


_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
