peter-toth commented on a change in pull request #25479:
[SPARK-28356][SHUFFLE][FOLLOWUP] Fix case with different pre-shuffle partition numbers
URL: https://github.com/apache/spark/pull/25479#discussion_r314940517
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/ReduceNumShufflePartitions.scala
##########
@@ -82,7 +82,12 @@ case class ReduceNumShufflePartitions(conf: SQLConf) extends Rule[SparkPlan] {
       // `ShuffleQueryStageExec` gives null mapOutputStatistics when the input RDD has 0 partitions,
       // we should skip it when calculating the `partitionStartIndices`.
       val validMetrics = shuffleMetrics.filter(_ != null)
-      if (validMetrics.nonEmpty) {
+      // We may have different pre-shuffle partition numbers, don't reduce shuffle partition number
+      // in that case. For example when we union fully aggregated data (data is arranged to a single
+      // partition) and a result of a SortMergeJoin (multiple partitions).
+      val distinctNumPreShufflePartitions =
+        validMetrics.map(stats => stats.bytesByPartitionId.length).distinct
+      if (validMetrics.nonEmpty && distinctNumPreShufflePartitions.length == 1) {
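
A minimal sketch of the scenario the new comment above describes (hypothetical data and column names, not part of the patch): the first child of the union is a global aggregation whose exchange uses a single pre-shuffle partition, while the second child contains a SortMergeJoin whose exchanges use `spark.sql.shuffle.partitions` partitions, so the collected `MapOutputStatistics` report `bytesByPartitionId` arrays of different lengths.

```scala
// Hypothetical repro sketch (local mode, made-up column names). It is meant to
// illustrate the mixed pre-shuffle partition numbers, not to reproduce the
// PR's test verbatim.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("different-pre-shuffle-partition-numbers")
  .config("spark.sql.adaptive.enabled", "true")
  .config("spark.sql.shuffle.partitions", "5")
  .config("spark.sql.autoBroadcastJoinThreshold", "-1") // force SortMergeJoin
  .getOrCreate()
import spark.implicits._

val left = spark.range(100).select(($"id" % 10).as("key"), $"id".as("v"))
val right = spark.range(100).select(($"id" % 10).as("key"), $"id".as("w"))

// Fully aggregated data: planned behind a SinglePartition exchange,
// i.e. one pre-shuffle partition.
val aggregated = left.agg(sum($"v").as("total"))

// SortMergeJoin result: its exchanges use spark.sql.shuffle.partitions (5 here).
val joined = left.join(right, "key").select($"v".as("total"))

// The final stage now reads shuffle stages with 1 and 5 pre-shuffle partitions;
// with the guard above, the rule leaves the shuffle partition number untouched.
aggregated.union(joined).collect()
```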
Review comment:
Yes, we could remove it, but the assert has been there since the original version of `ReduceNumShufflePartitions`, where the `distinctNumPreShufflePartitions.length == 1` check was also included. I'm not sure what the plan is for `ReduceNumShufflePartitions`. @carsonwang, @maryannxue, do you want to improve `Union`/`SinglePartition` handling in this rule? Shall we remove the assert?
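
For reference, the assert under discussion sits in `estimatePartitionStartIndices` and is roughly of this shape (paraphrased, not a verbatim copy of the source):

```scala
// Approximate shape of the existing assert (paraphrased): it insists that every
// shuffle feeding the stage reports the same number of pre-shuffle partitions,
// which the new distinctNumPreShufflePartitions.length == 1 guard now ensures
// before this code is reached.
val distinctNumPreShufflePartitions =
  mapOutputStatistics.map(stats => stats.bytesByPartitionId.length).distinct
assert(
  distinctNumPreShufflePartitions.length == 1,
  "There should be only one distinct value of the number of pre-shuffle partitions " +
    "among registered Exchange operators.")
```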