srowen commented on a change in pull request #26454: [SPARK-29818][MLLIB] Missing persist on RDD
URL: https://github.com/apache/spark/pull/26454#discussion_r345924649
##########
File path:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/BinaryClassificationMetrics.scala
##########
@@ -165,13 +166,17 @@ class BinaryClassificationMetrics @Since("3.0.0") (
       confusions: RDD[(Double, BinaryConfusionMatrix)]) = {
     // Create a bin for each distinct score value, count weighted positives and
     // negatives within each bin, and then sort by score values in descending order.
-    val counts = scoreLabelsWeight.combineByKey(
+    val binnedWeights = scoreLabelsWeight.combineByKey(
       createCombiner = (labelAndWeight: (Double, Double)) =>
         new BinaryLabelCounter(0.0, 0.0) += (labelAndWeight._1, labelAndWeight._2),
       mergeValue = (c: BinaryLabelCounter, labelAndWeight: (Double, Double)) =>
         c += (labelAndWeight._1, labelAndWeight._2),
       mergeCombiners = (c1: BinaryLabelCounter, c2: BinaryLabelCounter) => c1 += c2
-    ).sortByKey(ascending = false)
+    )
+    if (scoreLabelsWeight.getStorageLevel != StorageLevel.NONE) {
+      binnedWeights.persist()
+    }
+    val counts = binnedWeights.sortByKey(ascending = false)
Review comment:
Doesn't seem so. But that is the question I'd put to you in these cases: are you sure it makes a difference meaningful enough to overcome the overhead of persisting? I could imagine so here; I'm just wondering whether these changes are based on investigation or benchmarking, versus simply trying to persist lots of things.
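As context for the overhead question, here is a minimal, hypothetical sketch of the general guard-then-persist idiom being weighed, written for spark-shell or any code with a SparkContext in scope. The helper name persistIfUncached is illustrative only and not part of the patch; note that the patch itself guards on the input scoreLabelsWeight's storage level rather than on the RDD being persisted.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.storage.StorageLevel

    // Hypothetical helper (not from the patch): persist an RDD only if the caller
    // has not already done so, so that multiple downstream actions reuse the cached
    // partitions instead of recomputing the full lineage each time.
    def persistIfUncached[T](rdd: RDD[T]): RDD[T] = {
      if (rdd.getStorageLevel == StorageLevel.NONE) {
        rdd.persist(StorageLevel.MEMORY_ONLY)
      }
      rdd
    }

The trade-off behind the comment: persist() only pays off when the cached RDD feeds more than one action or downstream recomputation; otherwise it just adds storage, serialization, and eviction overhead, which is why a benchmark is being asked for.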