ulysses-you commented on code in PR #44661:
URL: https://github.com/apache/spark/pull/44661#discussion_r1461281654
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/CoalesceShufflePartitions.scala:
##########
@@ -146,13 +147,16 @@ case class CoalesceShufflePartitions(session: SparkSession) extends AQEShuffleRe
Seq(collectShuffleStageInfos(r))
case unary: UnaryExecNode => collectCoalesceGroups(unary.child)
case union: UnionExec => union.children.flatMap(collectCoalesceGroups)
- // If not all leaf nodes are exchange query stages, it's not safe to reduce the number of
- // shuffle partitions, because we may break the assumption that all children of a spark plan
- // have same number of output partitions.
+ case join: CartesianProductExec => join.children.flatMap(collectCoalesceGroups)
Review Comment:
The issue is that, if the plan itself does not require a shuffle exchange,
then we should not assume all of its leaf nodes are shuffle exchanges. How
about adding a new pattern that splits any non-unary plan which does not
require a shuffle into new groups? We could even remove the union pattern.
```scala
def doesNotRequireShuffleExchange(plan: SparkPlan): Boolean =
  plan.requiredChildDistribution.forall {
    case UnspecifiedDistribution => true
    case _: BroadcastDistribution => true
    case _ => false
  }

case p if doesNotRequireShuffleExchange(p) =>
  p.children.flatMap(collectCoalesceGroups)
```
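
To make the suggestion concrete, here is a hedged, self-contained sketch.
The types and the `doesNotRequireShuffleExchange` helper below are stand-ins
(not the real `org.apache.spark.sql.catalyst.plans.physical` classes), used
only to show why checking the required child distributions subsumes the
dedicated `UnionExec` pattern:

```scala
// Stand-in model of Spark's Distribution hierarchy -- illustrative only.
sealed trait Distribution
case object UnspecifiedDistribution extends Distribution
final case class BroadcastDistribution(mode: String) extends Distribution
final case class ClusteredDistribution(keys: Seq[String]) extends Distribution

// A plan needs no shuffle exchange for its children when every required
// child distribution can be satisfied without repartitioning.
def doesNotRequireShuffleExchange(requiredChildDistribution: Seq[Distribution]): Boolean =
  requiredChildDistribution.forall {
    case UnspecifiedDistribution => true
    case _: BroadcastDistribution => true
    case _ => false
  }

// A union-like plan requires UnspecifiedDistribution from every child, so
// the generic check would split its children into new coalesce groups:
val unionLike = Seq[Distribution](UnspecifiedDistribution, UnspecifiedDistribution)

// A sort-merge-join-like plan requires ClusteredDistribution on both sides,
// so its children must stay in one coalesce group:
val smjLike =
  Seq[Distribution](ClusteredDistribution(Seq("k")), ClusteredDistribution(Seq("k")))
```

Under this predicate `unionLike` evaluates to `true` and `smjLike` to
`false`, which is why the dedicated `UnionExec` case would become redundant.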
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]