On 6 April 2012 16:32, Immanuel B <[email protected]> wrote:
> Hey Alex,
>> a bonus you could add is logistic regression using L1 + L2. as well as
>> the support of ElasticNet (also L1 + L2) using the Lars algorithm.
> I'm somewhat lost, can you be more specific? Are you referring to strong rule
> support?

No, LARS is another algorithm for solving the LASSO regression
problem, distinct from the Coordinate Descent method (and from the
Stochastic Gradient Descent method as well).

The current implementation of LARS in scikit-learn only supports the
LASSO penalty (L1). It could be extended to support the ElasticNet
penalty (L1 + L2) as well.
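For comparison, the combined penalty is already available through the
coordinate-descent path: `ElasticNet` takes an `l1_ratio` mixing
parameter, while `LassoLars` only exposes the L1 strength `alpha`.
A small sketch (toy data assumed):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, LassoLars

# Toy data (assumed, for illustration only)
rng = np.random.RandomState(0)
X = rng.randn(30, 5)
y = X[:, 0] - 2 * X[:, 1] + 0.01 * rng.randn(30)

# Coordinate descent handles the combined L1 + L2 penalty:
# l1_ratio=1.0 is pure LASSO, l1_ratio=0.0 is pure ridge
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

# The LARS-based estimator only has the L1 penalty strength
lars = LassoLars(alpha=0.1).fit(X, y)
```

Extending LARS to ElasticNet would mean supporting that same mixed
penalty inside the path algorithm.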

Also Lars is another scikit-learn developer, but that's completely unrelated :P

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
