wangyum commented on a change in pull request #26409: [SPARK-29655][SQL] Read bucketed tables obeys spark.sql.shuffle.partitions
URL: https://github.com/apache/spark/pull/26409#discussion_r345714568
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
##########
@@ -83,7 +83,20 @@ case class EnsureRequirements(conf: SQLConf) extends Rule[SparkPlan] {
       numPartitionsSet.headOption
     }
-    val targetNumPartitions = requiredNumPartitions.getOrElse(childrenNumPartitions.max)
+    // Reading bucketed tables always obeys numShufflePartitions because
+    // maxNumPostShufflePartitions is usually much larger than numShufflePartitions,
+    // which causes some bucket map joins to lose efficacy after enabling adaptive execution.
+    val nonShuffleChildrenNumPartitions =
+      childrenIndexes.map(children).filterNot(_.isInstanceOf[ShuffleExchangeExec])
+        .map(_.outputPartitioning.numPartitions)
+    val expectedChildrenNumPartitions = if (nonShuffleChildrenNumPartitions.nonEmpty &&
+      conf.maxNumPostShufflePartitions > conf.numShufflePartitions) {
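For readers skimming the truncated hunk, here is a standalone sketch of the guarded selection it introduces. The branch bodies and the concrete partition counts are illustrative assumptions (the hunk cuts off before the branches), not the PR's code:
```scala
// Standalone sketch of the selection above; values are illustrative.
val numShufflePartitions = 5            // spark.sql.shuffle.partitions
val maxNumPostShufflePartitions = 500   // adaptive execution's upper bound
val childrenNumPartitions = Seq(8, 5)   // e.g. bucketed scan (8 buckets) + shuffle (5)
val nonShuffleChildrenNumPartitions = Seq(8)  // children that are not ShuffleExchangeExec

val expectedChildrenNumPartitions =
  if (nonShuffleChildrenNumPartitions.nonEmpty &&
      maxNumPostShufflePartitions > numShufflePartitions) {
    // Assumed branch body: prefer the non-shuffle (bucketed) side so the
    // bucket map join is preserved under adaptive execution.
    nonShuffleChildrenNumPartitions.max
  } else {
    // Assumed fallback: the pre-existing behavior, the max across all children.
    childrenNumPartitions.max
  }
// expectedChildrenNumPartitions == 8 with the values above
```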
##########
Review comment:
@cloud-fan @viirya I added `conf.maxNumPostShufflePartitions > conf.numShufflePartitions` to fix these test failures:
```
org.apache.spark.sql.execution.ReduceNumShufflePartitionsSuite.determining the number of reducers: plan already partitioned(minNumPostShufflePartitions: 5)
org.apache.spark.sql.execution.ReduceNumShufflePartitionsSuite.determining the number of reducers: plan already partitioned
```
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/113679/testReport/
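As background for the rationale comment in the hunk: a bucketed table's scan reports its bucket count as its output partitioning, independent of spark.sql.shuffle.partitions, which is why the non-shuffle children carry the partition count worth preserving. A minimal sketch; the table and column names are illustrative, not from the PR:
```scala
// Write a table bucketed into 8 buckets (bucketBy requires saveAsTable).
spark.range(100)
  .write
  .bucketBy(8, "id")
  .sortBy("id")
  .saveAsTable("t_bucketed")  // hypothetical table name

// A scan of t_bucketed reports outputPartitioning.numPartitions == 8, so
// even with spark.sql.shuffle.partitions = 5 the non-shuffle child still
// contributes 8, not 5.
spark.table("t_bucketed").groupBy("id").count().explain()
```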