sigmod commented on a change in pull request #35574:
URL: https://github.com/apache/spark/pull/35574#discussion_r812484191



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
##########
@@ -56,7 +57,23 @@ case class EnsureRequirements(
     // Ensure that the operator's children satisfy their output distribution requirements.
     var children = originalChildren.zip(requiredChildDistributions).map {
       case (child, distribution) if child.outputPartitioning.satisfies(distribution) =>
-        child
+        (child.outputPartitioning, distribution) match {
+          case (p: HashPartitioning, d: ClusteredDistribution) =>
+            if (conf.getConf(SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_SOLE_PARTITION) &&
+              requiredChildDistributions.size == 1 && !p.isPartitionedOnFullKeys(d)) {
+              // Add an extra shuffle for `ClusteredDistribution` even though its child

Review comment:
       >> Alternatively I think this can be done as well, in a separate physical plan rule post EnsureRequirements
   
   Wouldn't such a rule create a correctness issue?
   The rule would change the input shuffle of `w` without re-considering the downstream operators/shuffles of `w`, so the distribution requirement of a downstream operator of `w` may no longer be met with the newly injected ShuffleExchangeExec.
   
   I think shuffle planning is better done only in `EnsureRequirements`.
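   To make the hazard concrete, here is a minimal toy model (hypothetical simplified classes, not Spark's actual `Partitioning`/`Distribution` hierarchy) of the subset-based satisfaction check: a partitioning that satisfied a downstream `ClusteredDistribution` before a post-planning rewrite may stop satisfying it once a later rule re-shuffles on a different key set, and since `EnsureRequirements` has already run, no compensating shuffle would be inserted.

   ```scala
   // Toy model: a HashPartitioning satisfies a ClusteredDistribution when all
   // of its hash keys appear among the clustering keys (a simplified sketch of
   // Spark's subset-based check; names here are illustrative only).
   case class ClusteredDistribution(clustering: Seq[String])

   case class HashPartitioning(keys: Seq[String], numPartitions: Int) {
     def satisfies(d: ClusteredDistribution): Boolean =
       keys.nonEmpty && keys.forall(d.clustering.contains)
   }

   object PostRuleHazard extends App {
     // EnsureRequirements left the child of `w` hash-partitioned on ("a"),
     // which satisfies a downstream requirement clustered on ("a", "b").
     val planned    = HashPartitioning(Seq("a"), numPartitions = 200)
     val downstream = ClusteredDistribution(Seq("a", "b"))
     assert(planned.satisfies(downstream))

     // A later rule re-shuffles the input of `w` on a wider key set
     // ("a", "b", "c") without re-checking downstream requirements...
     val rewritten = HashPartitioning(Seq("a", "b", "c"), numPartitions = 200)

     // ...and the downstream requirement is no longer satisfied, because "c"
     // is not among the downstream clustering keys. No extra shuffle will be
     // inserted to repair this, since EnsureRequirements has already run.
     assert(!rewritten.satisfies(downstream))
   }
   ```

   This is only a sketch of the satisfaction logic; in Spark the actual check lives in the `Partitioning.satisfies` implementations, which is why doing all shuffle planning inside `EnsureRequirements` keeps every requirement re-checked in one place.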




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
