On 08/20/2012 07:31 PM, Mathieu Blondel wrote:
> With L1-regularization, for a given value of the regularization 
> parameter alpha, you get a certain number of non-zero coefficients. 
> Lars works by automatically computing the decrease in alpha that you 
> need to increment the number of non-zero coefficients by 1. That's why 
> lars_path returns the alphas, one for each feature added.
Ok, makes sense. Thanks for the explanation.
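For the archives, here is a minimal sketch of that behaviour (assuming scikit-learn's lars_path and a toy dataset from make_regression; the data and numbers are purely illustrative):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

# Toy regression problem, just to have something to run the path on.
X, y = make_regression(n_samples=50, n_features=10, random_state=0)

# method='lasso' follows the Lasso path; method='lar' is plain LARS.
alphas, active, coefs = lars_path(X, y, method='lasso')

# One alpha per kink in the path: at each returned alpha the active set
# changes, so the number of non-zero coefficients changes by one.
for alpha, col in zip(alphas, coefs.T):
    print("alpha=%.4f  n_nonzero=%d" % (alpha, int(np.sum(col != 0))))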
But now I don't understand what the difference between LarsCV and 
LassoLarsCV is.

Sorry, I don't have my ESL with me :-/

We should add that to the docs. There is no real explanation of LarsCV 
there, right? Perhaps it is only covered under lars_path?
Both lars_path and LarsCV (and the other *CV estimators) should be added 
to the narrative documentation in the linear model section.

Would anyone with ESL close by and/or a good knowledge of these models 
like to volunteer?

