Github user sddyljsx commented on a diff in the pull request:
https://github.com/apache/spark/pull/21859#discussion_r209418058
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/ShuffleExchangeExec.scala
---
@@ -294,7 +296,12 @@ object ShuffleExchangeExec {
             sorter.sort(iter.asInstanceOf[Iterator[UnsafeRow]])
           }
         } else {
-          rdd
+          part match {
+            case partitioner: RangePartitioner[InternalRow @unchecked, _]
+                if partitioner.getSampledArray != null =>
+              sparkContext.parallelize(partitioner.getSampledArray.toSeq, rdd.getNumPartitions)
--- End diff ---
```
newRdd.mapPartitionsInternal { iter =>
  val getPartitionKey = getPartitionKeyExtractor()
  val mutablePair = new MutablePair[Int, InternalRow]()
  iter.map { row => mutablePair.update(part.getPartition(getPartitionKey(row)), row) }
}
```
The newRdd actually does use the partitioner: it maps each row to a (partitionId, row) pair for the subsequent shuffle.
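
A minimal, self-contained sketch of that mapping on a plain RDD (a hypothetical standalone example, not the Spark-internal code above; the `local[2]` master, the demo keys, and the `RangePartitionDemo` name are assumptions):

```scala
import org.apache.spark.{RangePartitioner, SparkConf, SparkContext}

object RangePartitionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("range-partition-demo"))

    // Key-value rows; RangePartitioner requires a Product2-shaped RDD.
    val pairs = sc.parallelize(Seq(5, 1, 9, 3, 7).map(k => (k, s"row-$k")))

    // RangePartitioner samples the input RDD in its constructor to pick
    // range bounds for the requested number of partitions.
    val part = new RangePartitioner(2, pairs)

    // Mirror of the mapPartitions logic quoted above: tag each row with the
    // partition id computed from its key. (The Spark-internal version reuses
    // one MutablePair per partition to avoid allocating a tuple per row.)
    val tagged = pairs.map { case (key, row) => (part.getPartition(key), row) }

    tagged.collect().foreach(println) // e.g. (0,row-1), (0,row-3), (1,row-9), ...
    sc.stop()
  }
}
```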
---