Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22010#discussion_r214103667
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -396,7 +396,16 @@ abstract class RDD[T: ClassTag](
* Return a new RDD containing the distinct elements in this RDD.
*/
   def distinct(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
- map(x => (x, null)).reduceByKey((x, y) => x, numPartitions).map(_._1)
+    // If the data is already appropriately partitioned with a known partitioner, we can work locally.
+    def removeDuplicatesInPartition(itr: Iterator[T]): Iterator[T] = {
+      val set = new mutable.HashSet[T]()
+      itr.filter(set.add(_)) // add returns true only on first insertion, keeping the first occurrence
--- End diff --
This is a bad implementation and could OOM. You should reuse reduceByKey.
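For context on the suggestion, here is a minimal standalone sketch of what reusing `reduceByKey` buys (the object name, sample data, and `local[2]` master are mine for illustration, not from the PR). When the upstream RDD already carries a matching partitioner, `reduceByKey` skips the shuffle entirely and aggregates each partition through Spark's spillable `ExternalAppendOnlyMap`, so it achieves the same partition-local deduplication without holding an unbounded `HashSet` in memory:

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object DistinctViaReduceByKey {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("distinct-sketch").setMaster("local[2]"))

    // Pre-partition the data so it arrives with a known partitioner,
    // mirroring the case this PR tries to optimize.
    val keyed = sc.parallelize(Seq(1, 2, 2, 3, 3, 3))
      .map(x => (x, null))
      .partitionBy(new HashPartitioner(2))

    // Because `keyed` already has HashPartitioner(2) and we request the same
    // partitioning, reduceByKey aggregates each partition in place (no
    // shuffle), backed by Spark's spillable ExternalAppendOnlyMap.
    val distinct = keyed.reduceByKey((x, _) => x, 2).map(_._1)

    println(distinct.collect().sorted.mkString(", ")) // prints: 1, 2, 3
    sc.stop()
  }
}
```

Run against a pre-partitioned RDD like this, the `reduceByKey` step should show no shuffle write in the Spark UI, and the per-partition aggregation can spill to disk under memory pressure instead of OOMing.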
---