sigmod commented on a change in pull request #35574:
URL: https://github.com/apache/spark/pull/35574#discussion_r812542974



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
##########
@@ -56,7 +57,23 @@ case class EnsureRequirements(
     // Ensure that the operator's children satisfy their output distribution requirements.
     var children = originalChildren.zip(requiredChildDistributions).map {
       case (child, distribution) if child.outputPartitioning.satisfies(distribution) =>
-        child
+        (child.outputPartitioning, distribution) match {
+          case (p: HashPartitioning, d: ClusteredDistribution) =>
+            if (conf.getConf(SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_SOLE_PARTITION) &&
+              requiredChildDistributions.size == 1 && !p.isPartitionedOnFullKeys(d)) {
+              // Add an extra shuffle for `ClusteredDistribution` even though its child

Review comment:
       > So in the end, it seems like this is not only expressing requirements but also doing some sort of matching
   
   If we want to achieve such plans, I think we can (should?) still capture such things in requirements and `satisfies`, instead of baking the complexity into `EnsureRequirements` or yet another physical rule. E.g.,
   
   ```
   case class ClusteredDistribution(
       clustering: Seq[Expression],
       containing: Set[Expression],  /* a satisfying partitioning must contain these expressions */
       disallowedCombinations: Set[Set[Expression]],  /* disallow these key combinations */
       requiredNumPartitions: Option[Int] = None)
   ```
   
   where the latter two fields specify which subset of the possible partitionings is considered (there may be other ways to express the same thing). Then, @sunchao, the requirement capturing what you want looks like:
   `ClusteredDistribution({x, y, z}, {}, {{x}})`
   
   If we have that, HashClusteredDistribution is just a special case and is not needed, because
   `HashClusteredDistribution({x, y, z}) = ClusteredDistribution({x, y, z}, {x, y, z}, {})`
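   To make the semantics concrete, here is a rough, self-contained sketch of what I have in mind. This is my own illustration, not Spark's actual API: plain `String` keys stand in for `Expression`, and a `satisfiedBy` method stands in for the real `satisfies` machinery. It shows how the two extra fields could gate which hash partitionings satisfy the distribution:

   ```scala
   // Hypothetical augmented ClusteredDistribution (simplified: keys are Strings).
   case class ClusteredDistribution(
       clustering: Seq[String],
       containing: Set[String] = Set.empty,                 // a satisfying partitioning must contain these keys
       disallowedCombinations: Set[Set[String]] = Set.empty // exact key sets that are disallowed
   ) {
     // A hash partitioning on `partitioningKeys` satisfies this distribution iff:
     //   1. its keys are a subset of the clustering keys,
     //   2. it contains every key in `containing`,
     //   3. its key set is not one of the disallowed combinations.
     def satisfiedBy(partitioningKeys: Set[String]): Boolean =
       partitioningKeys.subsetOf(clustering.toSet) &&
         containing.subsetOf(partitioningKeys) &&
         !disallowedCombinations.contains(partitioningKeys)
   }

   object Demo {
     def main(args: Array[String]): Unit = {
       // The example above: cluster on {x, y, z}, but disallow partitioning on {x} alone.
       val d = ClusteredDistribution(Seq("x", "y", "z"),
         disallowedCombinations = Set(Set("x")))
       println(d.satisfiedBy(Set("x", "y"))) // true: subset of clustering, not disallowed
       println(d.satisfiedBy(Set("x")))      // false: {x} is a disallowed combination

       // HashClusteredDistribution({x, y, z}) as a special case:
       // containing = {x, y, z} forces the partitioning to use all of the keys.
       val hashLike = ClusteredDistribution(Seq("x", "y", "z"),
         containing = Set("x", "y", "z"))
       println(hashLike.satisfiedBy(Set("x", "y", "z"))) // true
       println(hashLike.satisfiedBy(Set("x", "y")))      // false: missing z
     }
   }
   ```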
   
   Without this augmented ClusteredDistribution, HashClusteredDistribution is the only thing we can play with.
   That being said, we need more expressive power for ClusteredDistribution, either via more fields or by keeping the previous HashClusteredDistribution :-)
   
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


