jackylee-ch commented on code in PR #44661:
URL: https://github.com/apache/spark/pull/44661#discussion_r1453489448


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/CoalesceShufflePartitions.scala:
##########
@@ -146,13 +147,15 @@ case class CoalesceShufflePartitions(session: SparkSession) extends AQEShuffleRe
      Seq(collectShuffleStageInfos(r))
    case unary: UnaryExecNode => collectCoalesceGroups(unary.child)
    case union: UnionExec => union.children.flatMap(collectCoalesceGroups)
-    // If not all leaf nodes are exchange query stages, it's not safe to reduce the number of
-    // shuffle partitions, because we may break the assumption that all children of a spark plan
-    // have same number of output partitions.
    // Note that, `BroadcastQueryStageExec` is a valid case:
    // If a join has been optimized from shuffled join to broadcast join, then the one side is
    // `BroadcastQueryStageExec` and other side is `ShuffleQueryStageExec`. It can coalesce the
    // shuffle side as we do not expect broadcast exchange has same partition number.
+    case join: BroadcastHashJoinExec => join.children.flatMap(collectCoalesceGroups)
+    case join: BroadcastNestedLoopJoinExec => join.children.flatMap(collectCoalesceGroups)

Review Comment:
   The assertion in [ShufflePartitionsUtil.coalescePartitionsWithSkew](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/ShufflePartitionsUtil.scala#L131) ensures that there are no unexpected partition specifications and that the start indices are identical across all the shuffles being coalesced together. However, this assertion can be violated when a union is involved. To address this, we split the union into groups at [collectCoalesceGroups](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/CoalesceShufflePartitions.scala#L144) so that each group satisfies the assertion on its own.
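
   As a toy illustration of the invariant the assertion protects (the values below are hypothetical, not the actual `ShufflePartitionsUtil` data structures): shuffles coalesced in the same group must agree on their partition boundaries.

   ```scala
   // Hypothetical coalesced start indices for two shuffles in one group.
   val startIndicesA = Seq(0, 3, 6, 9) // shuffle A's coalesce boundaries
   val startIndicesB = Seq(0, 3, 6, 9) // shuffle B, same coalesce group
   // The assertion requires these to be identical; two shuffles with
   // different partition numbers (e.g. 10 vs. 20) cannot both satisfy it.
   assert(startIndicesA == startIndicesB)
   ```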
   The problem we encountered is that if a query contains a union beneath a broadcast join, all sub-plans are grouped together as soon as the traversal reaches the broadcast join. If the sub-plans of the union have different shuffle partition numbers, an AssertionError is thrown; see the sketch below.
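
   A rough sketch of that query shape (illustrative only: `branch1`, `branch2`, and the explicit `repartition` numbers are made up to give the union branches different shuffle partition numbers; assumes a `spark` session is in scope, e.g. in `spark-shell`):

   ```scala
   import org.apache.spark.sql.functions.{broadcast, col}

   // Two union branches backed by shuffles with different partition numbers.
   val branch1 = spark.range(1000).toDF("id").repartition(10, col("id"))
   val branch2 = spark.range(1000).toDF("id").repartition(20, col("id"))
   val unioned = branch1.union(branch2)

   // A small relation so the join is planned as a broadcast join.
   val small = spark.range(10).toDF("id")

   // Before this change, reaching the broadcast join put every shuffle stage
   // below it into a single coalesce group, which can trip the assertion.
   unioned.join(broadcast(small), "id").collect()
   ```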
   
   Since the shuffle partition numbers on the two sides of a broadcast join are not related, we can safely traverse through the broadcast join into its children, find the union beneath it, and split the plans into groups there.
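
   Conceptually, the change turns the grouping from one merged group into per-branch groups (the group contents below are illustrative stand-ins, not the real `ShuffleStageInfo` objects):

   ```scala
   // Stand-ins for the shuffle stage of each union branch.
   val shuffleA = "branch1 shuffle (10 partitions)"
   val shuffleB = "branch2 shuffle (20 partitions)"

   // Before: the traversal stopped at the broadcast join, so both shuffles
   // ended up in one group and had to share coalesce boundaries.
   val before = Seq(Seq(shuffleA, shuffleB))

   // After: the join's children are visited, the union splits its branches,
   // and each branch's shuffle is coalesced independently.
   val after = Seq(Seq(shuffleA), Seq(shuffleB))
   ```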


