[
https://issues.apache.org/jira/browse/SPARK-21806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137077#comment-16137077
]
Sean Owen commented on SPARK-21806:
-----------------------------------
I implemented this and observed a few more things:
(0, p) should be prepended, for some value of p, even if there is already a
value on the curve for recall 0. This corresponds to setting the threshold
higher than the maximum score produced by the classifier.
If there's already a value on the curve for recall 0, then the point that gets
prepended doesn't affect the area, of course. It does matter in other cases.
Elsewhere Spark code defines precision as 1 when there are no positive
classifications whatsoever.
I still favor the change but wouldn't mind hearing other feedback from, say,
[~mlnick] or [~josephkb]; see https://github.com/apache/spark/pull/3118#r19986236
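As an illustrative sketch in plain Python (not MLlib's actual code; `with_anchor` is a hypothetical helper name), the prepending rule described above could look like this: reuse the precision of the highest-threshold point for the recall-0 anchor, leave the curve alone if it already starts at recall 0, and fall back to precision 1 when there are no positive classifications at all:

```python
def with_anchor(points):
    """Prepend a recall-0 anchor (0.0, p) to a list of (recall, precision)
    points, where p is the precision of the highest-threshold point.
    If the curve already starts at recall 0, it is returned unchanged,
    so the area under the curve is unaffected in that case."""
    if points and points[0][0] == 0.0:
        return points
    # Convention used elsewhere in Spark: precision is 1 when there are
    # no positive classifications whatsoever.
    p = points[0][1] if points else 1.0
    return [(0.0, p)] + points

# A curve that already hits recall 0 keeps its own first point:
print(with_anchor([(0.0, 0.9), (0.5, 0.8), (1.0, 0.4)]))
# A curve starting at recall > 0 gets (0.0, first precision) prepended:
print(with_anchor([(0.5, 0.8), (1.0, 0.4)]))
# -> [(0.0, 0.8), (0.5, 0.8), (1.0, 0.4)]
```

This keeps the area unchanged when a recall-0 point already exists, while avoiding the artificial (0.0, 1.0) start otherwise.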
> BinaryClassificationMetrics pr(): first point (0.0, 1.0) is misleading
> ----------------------------------------------------------------------
>
> Key: SPARK-21806
> URL: https://issues.apache.org/jira/browse/SPARK-21806
> Project: Spark
> Issue Type: Improvement
> Components: MLlib
> Affects Versions: 2.2.0
> Reporter: Marc Kaminski
> Priority: Minor
> Attachments: PRROC_example.jpeg
>
>
> I would like to reference a [discussion in scikit-learn|
> https://github.com/scikit-learn/scikit-learn/issues/4223], as this behavior
> is probably based on the scikit-learn implementation.
> Summary:
> Currently, the y-axis intercept of the precision-recall curve is set to (0.0,
> 1.0). This behavior is not ideal in certain edge cases (see example below)
> and can also have an impact on cross-validation when the optimization metric
> is set to "areaUnderPR".
> Please consider [blucena's
> post|https://github.com/scikit-learn/scikit-learn/issues/4223#issuecomment-215273613]
> for possible alternatives.
> Edge case example:
> Consider a bad classifier that assigns a high probability to all samples. A
> possible output might look like this:
> ||Real label || Score ||
> |1.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 1.0 |
> |0.0 | 0.95 |
> |0.0 | 0.95 |
> |1.0 | 1.0 |
> This results in the following PR points (first row is set by default):
> ||Threshold || Recall ||Precision ||
> |1.0 | 0.0 | 1.0 |
> |0.95| 1.0 | 0.2 |
> |0.0| 1.0 | 0.16 |
> The auPRC would be around 0.6. Classifiers with a more differentiated
> probability assignment will falsely appear to perform worse with respect to
> this auPRC.
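The edge case above can be reproduced with a short plain-Python sketch (a trapezoidal-rule calculation, not Spark's implementation), showing how the default (0.0, 1.0) first point inflates the area relative to reusing the first computed precision:

```python
# Samples from the table above: 2 positives, 10 negatives.
labels = [1.0] + [0.0] * 8 + [0.0] * 2 + [1.0]
scores = [1.0] + [1.0] * 8 + [0.95] * 2 + [1.0]

total_pos = sum(labels)  # 2 positive samples

def point(threshold):
    """(recall, precision) when predicting positive for score >= threshold."""
    predicted = [s >= threshold for s in scores]
    tp = sum(l for l, p in zip(labels, predicted) if p)
    return tp / total_pos, tp / sum(predicted)

# One point per distinct score threshold.
curve = [point(t) for t in (1.0, 0.95)]  # [(1.0, 0.2), (1.0, 0.1666...)]

def au_prc(points):
    """Area under a (recall, precision) curve by the trapezoidal rule."""
    area = 0.0
    for (r0, p0), (r1, p1) in zip(points, points[1:]):
        area += (r1 - r0) * (p0 + p1) / 2
    return area

print(au_prc([(0.0, 1.0)] + curve))          # default (0.0, 1.0) start -> 0.6
print(au_prc([(0.0, curve[0][1])] + curve))  # (0, first precision)    -> 0.2
```

With the default anchor the area is 0.6 even though this classifier's precision never exceeds 0.2 at any threshold; anchoring at the first computed precision yields 0.2 instead.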
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]