>>> Afaik, it was with an l1-penalized logistic. In my experience,
>>> l2-penalized models are less sensitive to the choice of the penalty
>>> parameter, and hinge loss (aka SVM) is less sensitive than
>>> l2-penalized logistic loss.
indeed.
> I think you need a dataset with n_features >> n_samples with ma
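For concreteness, the kind of sensitivity check being discussed could
look roughly like this. A minimal sketch using present-day scikit-learn
names (the 2012 API differed); the n_features >> n_samples shape follows
the suggestion quoted above, and the C grid is an arbitrary choice:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Assumed setup: n_features >> n_samples, as suggested above.
    X, y = make_classification(n_samples=100, n_features=2000,
                               n_informative=10, random_state=0)

    for penalty in ("l1", "l2"):
        scores = []
        for C in np.logspace(-3, 3, 7):
            clf = LogisticRegression(penalty=penalty, C=C,
                                     solver="liblinear")
            scores.append(cross_val_score(clf, X, y, cv=3).mean())
        # If the claim above holds, the l1 scores swing more across
        # the C grid than the l2 scores do.
        print(penalty, np.round(scores, 2))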
On 1 April 2012 16:38, Andreas wrote:
> On 04/01/2012 04:34 PM, Gael Varoquaux wrote:
>> On Sun, Apr 01, 2012 at 04:23:36PM +0200, Andreas wrote:
>>
>>> @Alex, could you maybe give the setting again where you had
>>> issues doing grid search without scale_C?
>>>
>> Afaik, it was with an l1-penalized logistic. In my experience,
>> l2-penalized models are less sensitive to the choice of the penalty
>> parameter, and hinge loss (aka SVM) is less sensitive than
>> l2-penalized logistic loss.
On 04/01/2012 04:34 PM, Gael Varoquaux wrote:
> On Sun, Apr 01, 2012 at 04:23:36PM +0200, Andreas wrote:
>
>> @Alex, could you maybe give the setting again where you had
>> issues doing grid search without scale_C?
>>
> Afaik, it was with an l1-penalized logistic. In my experience,
> l2-penalized models are less sensitive to the choice of the penalty
> parameter, and hinge loss (aka SVM) is less sensitive than
> l2-penalized logistic loss.
On Sun, Apr 01, 2012 at 04:23:36PM +0200, Andreas wrote:
> @Alex, could you maybe give the setting again where you had
> issues doing grid search without scale_C?
Afaik, it was with an l1-penalized logistic. In my experience,
l2-penalized models are less sensitive to the choice of the penalty
parameter, and hinge loss (aka SVM) is less sensitive than
l2-penalized logistic loss.
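To spell out what is being scaled: as I understand it (a schematic
paraphrase, not code from the scikit), liblinear-style solvers minimize
C * sum_i loss_i + penalty(w), so the data term grows with n_samples and
the best C shifts with dataset size; scale_C=True divided C by n_samples
to turn the sum into a mean:

    def schematic_objective(C, losses, penalty_value, scale_C=True):
        # `losses` is a per-sample list of loss values; `penalty_value`
        # stands in for e.g. ||w||_1 or ||w||_2^2 at a given w.
        C_eff = C / len(losses) if scale_C else C
        return C_eff * sum(losses) + penalty_value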
> Something that bothers me though, is that with libsvm, C=1 or C=10
> seems to be a reasonable default that works well both for datasets
> with n_samples=100 and n_samples=1 (by playing with the range of
> datasets available in the scikit). On the other hand, alpha would
> have to be grid searched.
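The tension comes from the two parametrizations: C multiplies a sum of
per-sample losses, while alpha multiplies the penalty against a mean
loss, so the rough correspondence is alpha = 1 / (C * n_samples). A
throwaway illustration (the function name is mine):

    def alpha_from_C(C, n_samples):
        # "C * sum(loss) + penalty"  <->  "mean(loss) + alpha * penalty"
        return 1.0 / (C * n_samples)

    print(alpha_from_C(1.0, 100))     # 0.01
    print(alpha_from_C(1.0, 100000))  # 1e-05: same C, very different alpha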
On Sun, Apr 01, 2012 at 08:33:47AM +1000, Robert Layton wrote:
> In cases where it is ambiguous, I would be happy with an "er"
> convention; however, if the algorithm is sufficiently named, stick
> with that.
+1
G