The title says it all: after the last modification to the precision-recall
code, if the second argument to the precision-recall function is not
between 0 and 1, the code gives nonsensical results without a warning. It
used to work.

I realise that the argument is called 'probas_pred', but let's face it:
it can be anything that is an increasing function of the confidence in a
detection, in other words a test statistic. A common use case is to plug
in the output of a decision_function, as with SVMs, or an F-score.
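
To make the use case concrete, here is a minimal sketch (the dataset and
estimator are just placeholders, not taken from any real code):

    # Feed unbounded decision_function scores, not probabilities,
    # into the precision-recall code.
    from sklearn.datasets import make_classification
    from sklearn.metrics import precision_recall_curve
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=200, random_state=0)
    clf = LinearSVC().fit(X, y)
    scores = clf.decision_function(X)  # test statistic, not in [0, 1]
    precision, recall, thresholds = precision_recall_curve(y, scores)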

I suggest changing it back to work with any unbounded test statistic. Any
reason not to? I am offering to do the work.

Cheers,

Gaël

PS: I've been overhauling some code written last February using
scikit-learn so that it works with the latest version, and it is broken in
many subtle ways (like the one I am mentioning in this email) due to
subtle changes in behaviour in the scikit. :( Will send pull requests for
all of them.
