It may have to do with the different scaling behaviors that these two types
of penalty show with the number of samples. I remember there being
investigations into this for the respective scikit-learn estimators, but I
don't know the outcome, nor of any literature to back this up.
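
For reference, here is a sketch (in my own notation) of the two criteria
being compared; the first form is, as far as I know, the adaptive elastic
net of Zou & Zhang (2009), up to their (1 + \lambda_2/n) rescaling of the
final estimate:

% Adaptive elastic net: data-driven weights \hat{w}_j enter the L1 term only
\hat{\beta} = \arg\min_{\beta}\; \|y - X\beta\|_2^2
            + \lambda_1 \sum_{j=1}^{p} \hat{w}_j |\beta_j|
            + \lambda_2 \|\beta\|_2^2

% Variant under discussion: the same weights also enter the L2 term
\hat{\beta} = \arg\min_{\beta}\; \|y - X\beta\|_2^2
            + \lambda_1 \sum_{j=1}^{p} \hat{w}_j |\beta_j|
            + \lambda_2 \sum_{j=1}^{p} \hat{w}_j \beta_j^2

My guess as to the asymmetry: the adaptive weights are there to give the L1
term its selection (oracle) property, while the ridge term mainly stabilizes
the fit under correlated features, so the theory does not require weighting it.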

On Wed, Jul 22, 2015 at 3:58 PM, Doaa Altarawy <daltar...@vt.edu> wrote:

> Thanks, that looks like the most suitable solution if I multiply the
> weights into both the L1 and L2 penalties.
>
> Currently, the paper I am following uses the adaptive elastic net; that
> is, the weights are multiplied into the Lasso (L1) penalty only. I don't
> know what the effect would be if the weights were multiplied into the
> Ridge (L2) penalty as well.
> I went back through the paper, but the authors don't say why the weights
> appear in L1 only and not in L2.
>
> Any ideas about that?
>
> --
> Doaa Altarawy,
> PhD Student, Computer Science,
> Virginia Tech, USA.
>
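
For what it's worth, here is a minimal runnable sketch of the
column-rescaling trick mentioned in the quoted message, using
scikit-learn's ElasticNet (the data, weights, and hyperparameters below are
placeholders, not from the paper). Note that dividing column j of X by a
weight v_j folds v_j into the L1 term but v_j**2 into the L2 term, so this
reproduces the "weights in both penalties" variant, not the paper's
L1-only weighting:

import numpy as np
from sklearn.linear_model import ElasticNet

# Placeholder data; in the adaptive elastic net the weights v would come
# from a pilot estimate, e.g. v_j = 1 / |beta_pilot_j| ** gamma.
rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = X.dot(np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])) + rng.randn(200)
v = np.abs(rng.randn(10)) + 0.1  # strictly positive per-feature weights

# Fold the weights into the design matrix: X_tilde[:, j] = X[:, j] / v[j].
# If w_tilde minimizes the ElasticNet criterion on (X_tilde, y), then
# w = w_tilde / v minimizes the original criterion with |w_j| weighted by
# v_j in the L1 term and w_j**2 weighted by v_j**2 in the L2 term.
X_tilde = X / v
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_tilde, y)
coef = model.coef_ / v  # map back to the original feature scale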
