srowen commented on a change in pull request #17697: [SPARK-20414][MLLIB] avoid creating only 16 reducers when calling topByKey()
URL: https://github.com/apache/spark/pull/17697#discussion_r290743031
 
 

 ##########
 File path: mllib/src/main/scala/org/apache/spark/mllib/rdd/MLPairRDDFunctions.scala
 ##########
 @@ -47,7 +47,18 @@ class MLPairRDDFunctions[K: ClassTag, V: ClassTag](self: RDD[(K, V)]) extends Se
       combOp = (queue1, queue2) => {
         queue1 ++= queue2
       }
-    ).mapValues(_.toArray.sorted(ord.reverse))  // This is a min-heap, so we reverse the order.
+    ).mapValues(_.toArray.sorted(ord.reverse)) // This is a min-heap, so we reverse the order.
+  }
+
+  def topByKey(num: Int, bucketsCount: Int)(implicit ord: Ordering[V]): RDD[(K, Array[V])] = {
 
 Review comment:
   You can just keep one method with a new optional Int argument; its default can come from `Partitioner.defaultPartitioner(self)`. Also, this new method doesn't appear to be used anywhere?
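   For illustration only, here is a sketch of that single-method shape, written with a `Partitioner` parameter (so the default can literally be `Partitioner.defaultPartitioner(self)`) rather than a bare `Int`; the parameter name and exact signature are assumptions, not what this PR implements:

```scala
import scala.reflect.ClassTag

import org.apache.spark.Partitioner
import org.apache.spark.rdd.RDD
import org.apache.spark.util.BoundedPriorityQueue

class MLPairRDDFunctions[K: ClassTag, V: ClassTag](self: RDD[(K, V)]) extends Serializable {

  // One method instead of two overloads: callers may override the partitioner,
  // and the default follows Spark's usual defaultPartitioner choice instead of
  // whatever small parallelism the upstream RDD happened to have.
  def topByKey(num: Int,
               partitioner: Partitioner = Partitioner.defaultPartitioner(self))
              (implicit ord: Ordering[V]): RDD[(K, Array[V])] = {
    self.aggregateByKey(new BoundedPriorityQueue[V](num)(ord), partitioner)(
      seqOp = (queue, item) => queue += item,
      combOp = (queue1, queue2) => queue1 ++= queue2
    ).mapValues(_.toArray.sorted(ord.reverse)) // min-heap, so reverse the order
  }
}
```

   A caller who does want to force a specific reducer count could then pass e.g. `new HashPartitioner(n)` as the second argument.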
