cloud-fan commented on a change in pull request #32084:
URL: https://github.com/apache/spark/pull/32084#discussion_r610655569
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/CoalesceShufflePartitions.scala
##########
@@ -35,14 +35,25 @@ case class CoalesceShufflePartitions(session: SparkSession) extends CustomShuffl
     if (!conf.coalesceShufflePartitionsEnabled) {
       return plan
     }
-    if (!plan.collectLeaves().forall(_.isInstanceOf[QueryStageExec])
-        || plan.find(_.isInstanceOf[CustomShuffleReaderExec]).isDefined) {
-      // If not all leaf nodes are query stages, it's not safe to reduce the number of
-      // shuffle partitions, because we may break the assumption that all children of a spark plan
-      // have same number of output partitions.
-      return plan
+
+    if (canCoalescePartitions(plan)) {
Review comment:
Can we make this rule more general? Ideally we should split the query
plan of the given query stage into several groups, where the shuffles within
one group must have the same number of partitions.
This rule is overly simplified right now: it assumes the entire query stage
is one group. That is definitely wrong for Union, and we should fix it. To be
conservative, we can put everything in one group except for Union. We
should also make the code extensible, so that we can add more "split points"
that divide groups.
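
To illustrate the idea, here is a minimal, self-contained sketch of the grouping
scheme described above, using a toy plan tree rather than Spark's real SparkPlan
API (all names here -- Plan, ShuffleStage, Union, Node, GroupSplitter -- are
illustrative, not part of Spark). Union is the one split point: each of its
children seals its own group, while every other node conservatively merges its
children's open groups into one.

```scala
// Toy plan tree standing in for Spark's physical plan (illustrative only).
sealed trait Plan { def children: Seq[Plan] }
case class ShuffleStage(id: Int, children: Seq[Plan] = Nil) extends Plan
case class Union(children: Seq[Plan]) extends Plan
case class Node(children: Seq[Plan]) extends Plan

object GroupSplitter {
  // Returns (openGroup, closedGroups): the open group is still being built
  // as we walk up the tree; closed groups were sealed by a Union below.
  private def split(plan: Plan): (Seq[Int], Seq[Seq[Int]]) = plan match {
    case Union(children) =>
      // Union is a split point: seal each child's open group.
      val results = children.map(split)
      val closed = results.flatMap { case (open, done) =>
        done ++ (if (open.nonEmpty) Seq(open) else Nil)
      }
      (Nil, closed)
    case s: ShuffleStage =>
      (Seq(s.id), Nil)
    case other =>
      // Conservatively merge all children's open groups into one.
      val results = other.children.map(split)
      (results.flatMap(_._1), results.flatMap(_._2))
  }

  // Shuffle stages within one returned group must be coalesced to the
  // same number of partitions; different groups are independent.
  def splitGroups(plan: Plan): Seq[Seq[Int]] = {
    val (open, closed) = split(plan)
    closed ++ (if (open.nonEmpty) Seq(open) else Nil)
  }
}
```

For example, a Union over two shuffle stages with a third shuffle stage as a
sibling yields three independent groups, so each branch can be coalesced to a
different partition count:

```scala
val plan = Node(Seq(Union(Seq(ShuffleStage(1), ShuffleStage(2))), ShuffleStage(3)))
GroupSplitter.splitGroups(plan)  // Seq(Seq(1), Seq(2), Seq(3))
```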
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]