here is the PR
https://github.com/scikit-learn/scikit-learn/pull/438
Alex
On Fri, Nov 11, 2011 at 3:53 AM, Gael Varoquaux wrote:
> On Thu, Nov 10, 2011 at 09:58:16PM -0500, Alexandre Gramfort wrote:
>> To me it is wrong not to apply such a scaling by n_samples. To
>> motivate this, just look
On Thu, Nov 10, 2011 at 09:58:16PM -0500, Alexandre Gramfort wrote:
> To me it is wrong not to apply such a scaling by n_samples. To
> motivate this, just look at the gist and you will see that if you don't
> do it, then C / alpha needs to be changed if you duplicate every sample.
> This is partic
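
A minimal sketch of the duplication argument above (not part of the thread).
It assumes present-day scikit-learn behaviour: Lasso's objective includes a
1 / (2 * n_samples) factor on the squared loss, while LinearSVC's objective is
C * sum(per-sample losses) + penalty with no such factor, so only the latter
is sensitive to duplicating every sample.

# Sketch: duplicating every sample leaves a 1/n_samples-scaled objective's
# minimizer unchanged, but shifts the solution of an unscaled one.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y_reg = X @ rng.randn(5) + 0.5 * rng.randn(50)   # regression target
y_clf = (y_reg > 0).astype(int)                  # binary classification target

# Duplicate every sample once.
X2 = np.vstack([X, X])
y_reg2, y_clf2 = np.r_[y_reg, y_reg], np.r_[y_clf, y_clf]

# Lasso: 1 / (2 * n_samples) * ||y - Xw||^2 + alpha * ||w||_1
# The 1/n_samples factor absorbs the duplication, so coef_ stays the same.
a = Lasso(alpha=0.1).fit(X, y_reg).coef_
b = Lasso(alpha=0.1).fit(X2, y_reg2).coef_
print("Lasso coef unchanged:    ", np.allclose(a, b, atol=1e-4))

# LinearSVC: C * sum_i loss(x_i, y_i) + penalty, with no 1/n_samples factor.
# Duplicating the data acts like doubling C, so coef_ generally shifts.
c = LinearSVC(C=1.0, random_state=0, max_iter=10000).fit(X, y_clf).coef_
d = LinearSVC(C=1.0, random_state=0, max_iter=10000).fit(X2, y_clf2).coef_
print("LinearSVC coef unchanged:", np.allclose(c, d, atol=1e-4))
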
Hi all,
I realized today that not all models scale the regularization parameter
(C or alpha) with the number of samples, that is, minimize during fit a
cost function of the form:
1/n_samples \sum_i loss(x_i, y_i) + alpha \| ... \|_x
or
C/n_samples \sum_i loss(x_i, y_i) + \| ... \|_x
Apparently l
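
To make the two cost forms above concrete, here is a small numpy sketch (not
part of the thread); the squared loss and l1 penalty are only stand-ins for
the generic loss(x_i, y_i) and \| ... \|_x terms. The mean loss is invariant
when every sample is duplicated, whereas the plain sum doubles, which is why
alpha / C has to be rescaled in an unscaled formulation.

# Sketch: the 1/n_samples factor keeps alpha meaningful as the dataset grows.
import numpy as np

def scaled_objective(w, X, y, alpha):
    # 1/n_samples * sum_i loss(x_i, y_i) + alpha * ||w||_1
    return np.mean((y - X @ w) ** 2) + alpha * np.abs(w).sum()

def unscaled_objective(w, X, y, alpha):
    # sum_i loss(x_i, y_i) + alpha * ||w||_1  (no 1/n_samples factor)
    return np.sum((y - X @ w) ** 2) + alpha * np.abs(w).sum()

rng = np.random.RandomState(0)
X = rng.randn(30, 4)
w = rng.randn(4)
y = X @ w + 0.1 * rng.randn(30)

# Duplicate every sample once.
X2, y2 = np.vstack([X, X]), np.r_[y, y]

alpha = 0.5
print(scaled_objective(w, X, y, alpha), scaled_objective(w, X2, y2, alpha))
# -> identical: the mean loss is invariant under duplication, so the same
#    alpha keeps the same meaning.
print(unscaled_objective(w, X, y, alpha), unscaled_objective(w, X2, y2, alpha))
# -> the data-fit term doubles, so alpha would have to be doubled
#    (equivalently, C halved) to keep the same trade-off.
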