Github user holdenk commented on a diff in the pull request:
    --- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
    @@ -396,7 +396,16 @@ abstract class RDD[T: ClassTag](
        * Return a new RDD containing the distinct elements in this RDD.
    def distinct(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
    -    map(x => (x, null)).reduceByKey((x, y) => x, numPartitions).map(_._1)
    +    // If the data is already appropriately partitioned with a known partitioner we can work locally.
    +    def removeDuplicatesInPartition(itr: Iterator[T]): Iterator[T] = {
    +      val set = new mutable.HashSet[T]()
    +      itr.filter(set.add(_))
    --- End diff --
    So since a `HashSet` can only contain one instance of each element, we don't need to worry about keeping multiple copies of an element.
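    A minimal standalone sketch (outside Spark, using a hypothetical helper mirroring the diff) of why `itr.filter(set.add(_))` deduplicates: `mutable.HashSet.add` returns `true` only when the element was not already present, so the filter keeps exactly the first occurrence of each value and lazily drops the rest:

    ```scala
    import scala.collection.mutable

    object DedupSketch {
      // Mirrors the helper in the diff: add() returns true only the first
      // time an element is inserted, so filter keeps exactly the first
      // occurrence of each value in the iterator.
      def removeDuplicatesInPartition[T](itr: Iterator[T]): Iterator[T] = {
        val set = new mutable.HashSet[T]()
        itr.filter(set.add(_))
      }

      def main(args: Array[String]): Unit = {
        val out = removeDuplicatesInPartition(Iterator(1, 2, 2, 3, 1)).toList
        println(out)  // List(1, 2, 3)
      }
    }
    ```

    Note the iterator stays lazy: elements are tested against the set only as the downstream consumer pulls them, so no extra pass over the partition is needed.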

