>>> Afaik, it was with an l1-penalized logistic. In my experience,
>>> l2-penalized models are less sensitive to the choice of the penalty
>>> parameter, and the hinge loss (aka SVM) is less sensitive than the
>>> l2 or logistic loss.

indeed.

> I think you need a dataset with n_features >> n_samples and many
> noisy features, maybe using make_classification with n_informative
> == 0.1 * n_features, for instance:

exactly
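
For instance, a minimal sketch of such a dataset (untested, just the
standard make_classification API from sklearn.datasets; the sizes are
made up for illustration):

    from sklearn.datasets import make_classification

    n_features = 1000
    X, y = make_classification(
        n_samples=100,                        # n_features >> n_samples
        n_features=n_features,
        n_informative=int(0.1 * n_features),  # the rest are noise
        n_redundant=0,
        random_state=0,
    )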

I discovered/suffered from this problem when writing the randomized L1
logistic code: the optimal C found with a sample_fraction < 1 was a bad
C for the full problem.
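
Something like the following shows the effect (an untested sketch using
the current sklearn API and the X, y from the snippet above; the C grid
is arbitrary): the best C picked on a sub-sample can differ from the
best C on the full data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    grid = {"C": np.logspace(-2, 2, 9)}
    est = LogisticRegression(penalty="l1", solver="liblinear")

    # best C on a 50% sub-sample (cf. sample_fraction < 1)
    rng = np.random.RandomState(0)
    idx = rng.permutation(len(y))[:len(y) // 2]
    gs_sub = GridSearchCV(est, grid, cv=3).fit(X[idx], y[idx])

    # best C on the full problem
    gs_full = GridSearchCV(est, grid, cv=3).fit(X, y)

    print(gs_sub.best_params_, gs_full.best_params_)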

Alex
