To follow up on the message below: I notice that scipy implements nnls by
calling some (presumably open-source) Fortran code. See
https://github.com/scipy/scipy/blob/v0.11.0/scipy/optimize/nnls.py#L7

Maybe sklearn could do the same thing?  I have to admit I don't fully
understand the issues around duplication of effort between scipy and
sklearn.
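For reference, here is a minimal sketch of calling that scipy wrapper
(this assumes scipy >= 0.11, where the function linked above lives; the
toy matrix is my own made-up example):

```python
import numpy as np
from scipy.optimize import nnls

# Toy overdetermined system whose exact least-squares solution
# happens to be non-negative (my own example data).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

# nnls minimises ||Ax - b||_2 subject to x >= 0 elementwise,
# returning the solution and the residual norm.
x, rnorm = nnls(A, b)
print(x)      # approximately [1. 1.]
```

Note there is nothing to tune here: the call takes only the design
matrix and the target vector.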

Raphael

On 7 October 2012 19:43, Raphael Clifford <[email protected]> wrote:
> Hi all,
>
> Following a conversation on irc, I would like to suggest the
> non-negative least squares algorithm for sklearn.  This much older
> method has some advantages over l1 regularisation (lasso) when the
> signs of the terms are known, and particularly when sparse recovery
> is required.  One especially large advantage is that it needs no
> tuning parameters, and by forcing non-negativity you may also get
> sparser results than lasso gives.
>
> This recent paper http://arxiv.org/abs/1205.0953 summarises some of
> the benefits much better than I could.
>
> Best wishes,
> Raphael

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
