Hi all,

Following a conversation on IRC, I would like to suggest adding the
non-negative least squares (NNLS) algorithm to sklearn.  This much
older method has some advantages over l1 regularisation (lasso) when
the signs of the coefficients are known, which is particularly the
case when sparse recovery is required.  One especially large advantage
is that it needs no tuning parameters, and by forcing non-negativity
you may also get sparser results than lasso gives.
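
To illustrate what I mean, here is a minimal sketch using
scipy.optimize.nnls (which already implements the classic
Lawson-Hanson method); the data and true coefficients are made up for
the example, and the sparsity of the recovered solution comes purely
from the non-negativity constraint, with no regularisation parameter:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic problem: a sparse, non-negative coefficient vector.
rng = np.random.RandomState(0)
X = rng.rand(20, 5)
true_coef = np.array([1.0, 0.0, 2.0, 0.0, 0.5])  # non-negative, sparse
y = X.dot(true_coef)

# Solve min ||X w - y||_2 subject to w >= 0; no tuning parameter needed.
coef, residual = nnls(X, y)
print(coef)      # recovers the sparse non-negative coefficients
print(residual)  # close to zero, since y lies in the feasible set
```

Note how the zero entries of true_coef come back as exact zeros, which
is the sparsity-without-a-parameter behaviour I am referring to.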

This recent paper http://arxiv.org/abs/1205.0953 summarises some of
the benefits much better than I could.

Best wishes,
Raphael

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
