viirya commented on a change in pull request #25479:
[SPARK-28356][SHUFFLE][FOLLOWUP] Fix case with different pre-shuffle partition numbers
URL: https://github.com/apache/spark/pull/25479#discussion_r314882775
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/ReduceNumShufflePartitions.scala
##########
@@ -82,7 +82,12 @@ case class ReduceNumShufflePartitions(conf: SQLConf) extends Rule[SparkPlan] {
     // `ShuffleQueryStageExec` gives null mapOutputStatistics when the input RDD has 0 partitions,
     // we should skip it when calculating the `partitionStartIndices`.
     val validMetrics = shuffleMetrics.filter(_ != null)
-    if (validMetrics.nonEmpty) {
+    // We may have different pre-shuffle partition numbers, don't reduce shuffle partition number
+    // in that case. For example when we union fully aggregated data (data is arranged to a single
+    // partition) and a result of a SortMergeJoin (multiple partitions).
+    val distinctNumPreShufflePartitions =
+      validMetrics.map(stats => stats.bytesByPartitionId.length).distinct
+    if (validMetrics.nonEmpty && distinctNumPreShufflePartitions.length == 1) {
Review comment:
If we have this condition `distinctNumPreShufflePartitions.length == 1`, why
do we need the assert at L136? Shall we remove the assert?
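The check being discussed can be sketched in isolation. This is a minimal, self-contained illustration of the condition in the diff, not Spark's actual code: `Stats` is a hypothetical stand-in for `MapOutputStatistics`, and `canReducePartitions` is an illustrative helper name.

```scala
// Hypothetical stand-in for MapOutputStatistics; only bytesByPartitionId
// matters here (its length is the pre-shuffle partition count).
case class Stats(bytesByPartitionId: Array[Long])

def canReducePartitions(shuffleMetrics: Seq[Stats]): Boolean = {
  // Null statistics come from stages whose input RDD has 0 partitions.
  val validMetrics = shuffleMetrics.filter(_ != null)
  val distinctNumPreShufflePartitions =
    validMetrics.map(_.bytesByPartitionId.length).distinct
  // Only coalesce when every stage reports the same pre-shuffle partition count.
  validMetrics.nonEmpty && distinctNumPreShufflePartitions.length == 1
}

// A fully aggregated side (1 partition) unioned with a SortMergeJoin side
// (here, 5 partitions) yields two distinct counts, so reduction is skipped.
val aggregated = Stats(Array(100L))
val joined     = Stats(Array.fill(5)(20L))
assert(!canReducePartitions(Seq(aggregated, joined)))
assert(canReducePartitions(Seq(joined, Stats(Array.fill(5)(30L)))))
```

Under this sketch, the review question amounts to: once the `distinct ... length == 1` filter gates the rule, any later assert on equal partition counts is checking an invariant the gate already guarantees.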
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]