On 04/01/2012 09:27 PM, Alexandre Gramfort wrote:
>>>> Afaik, it was with an l1-penalized logistic. In my experience,
>>>> l2-penalized models are less sensitive to the choice of the penalty
>>>> parameter, and hinge loss (aka SVM) is less sensitive than l2 or
>>>> logistic loss.
> indeed.
>
>> I think you need a dataset with n_features >> n_samples with many
>> noisy features, maybe using make_classification with
>> n_informative == 0.1 * n_features, for instance:
> exactly
>
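A minimal sketch of the suggested setup (all parameter values are my own illustrative choices, not from the thread): a dataset with many more features than samples, only 10% of them informative, on which the score of an l1-penalized logistic regression swings strongly with C.

```python
# Hedged sketch: n_features >> n_samples with mostly noisy features,
# roughly as suggested above. Values are illustrative, not prescriptive.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_samples, n_features = 50, 500
X, y = make_classification(n_samples=n_samples, n_features=n_features,
                           n_informative=int(0.1 * n_features),
                           n_redundant=0, random_state=0)

# l1-penalized logistic regression: cross-validated score is quite
# sensitive to the choice of C on this kind of data.
for C in [0.01, 0.1, 1.0, 10.0]:
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    score = cross_val_score(clf, X, y, cv=3).mean()
    print(C, score)
```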
> I've discovered/suffered from the problem when writing the randomized
> L1 logistic code, where the optimal C found using sample_fraction < 1
> was leading to a bad C for the full problem.
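The subsampling effect described above can be reproduced along these lines (a hedged sketch only; the randomized L1 code itself is not shown, and all sizes and grids are illustrative):

```python
# Hedged sketch: the C selected by cross-validation on a subsample
# need not match the C selected on the full data. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=50, n_redundant=0,
                           random_state=0)

# Best C on the full data.
full = LogisticRegressionCV(penalty="l1", solver="liblinear",
                            Cs=10, cv=3).fit(X, y)

# Best C on a 50% subsample (i.e. sample_fraction < 1).
rng = np.random.RandomState(0)
idx = rng.choice(len(X), size=len(X) // 2, replace=False)
sub = LogisticRegressionCV(penalty="l1", solver="liblinear",
                           Cs=10, cv=3).fit(X[idx], y[idx])

print("C on full data:", full.C_, "C on subsample:", sub.C_)
```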
>
I created an issue to discuss this further. I'd really like to have a
good solution, as I'd really like to address this in the release.

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
