On Sat, Mar 17, 2012 at 4:44 AM, Alexandre Gramfort
<[email protected]> wrote:
> without the scale_C the libsvm/liblinear bindings are the only models
> whose hyperparameters
> depend on the training set size.

This statement doesn't sound true. Generally hyper-parameters
(especially ones to do with regularization) *do* depend on training
set size, and not in such straightforward ways. Data is never
perfectly i.i.d., and sometimes it can be far from it. My impression
was that standard practice for SVMs is to optimize C on held-out data.
When would the scale_C heuristic actually save anyone from having to
do this optimization?
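To be concrete, by "optimize C on held-out data" I mean something like
the following (a minimal sketch using scikit-learn's current
grid-search API; the dataset and the grid values are arbitrary
placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy classification problem standing in for real data.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Cross-validated search over C: whatever convention C is expressed in,
# the selected value is the one that generalizes best on held-out folds.
grid = GridSearchCV(SVC(kernel="linear"),
                    param_grid={"C": [0.01, 0.1, 1.0, 10.0, 100.0]},
                    cv=5)
grid.fit(X, y)
best_C = grid.best_params_["C"]
```

The point being: whichever scaling convention is chosen, this search
still has to happen, so the heuristic doesn't remove the need for it.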

Even if the scale_C heuristic (is it fair to call it that?) is a good
idea, my 2c is that it does not justify redefining the meaning of the
"C" parameter, which has a very standard interpretation in papers,
textbooks, and other SVM solvers. If you really must redefine the C
parameter (but why?), then it would make sense to me to rename it as
well.
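For the record, the sense in which C's effective strength depends on
n_samples under the standard objective, min 1/2 ||w||^2 + C * sum_i
hinge_i, is exact: duplicating every sample doubles the loss term, so
halving C recovers the same optimum. That is the rescaling a
scale_C-style convention applies. A sketch (LinearSVC here as an
illustrative liblinear-backed model, not the exact binding under
discussion):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=100, n_features=5, random_state=0)
# Replicate every sample once: the hinge-loss sum exactly doubles.
X2, y2 = np.vstack([X, X]), np.hstack([y, y])

# C halved on the doubled data gives the same objective, hence
# (up to solver tolerance) the same weight vector.
clf_a = LinearSVC(C=1.0, dual=True, max_iter=10000, random_state=0).fit(X, y)
clf_b = LinearSVC(C=0.5, dual=True, max_iter=10000, random_state=0).fit(X2, y2)
```

So under the standard convention the "same" C means something
different at different n -- which is the argument for scaling, even if
redefining an established parameter is the wrong way to expose it.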

- James

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
