sunchao commented on a change in pull request #35574:
URL: https://github.com/apache/spark/pull/35574#discussion_r813495365



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala
##########
@@ -88,6 +93,19 @@ case class ClusteredDistribution(
         s"the actual number of partitions is $numPartitions.")
     HashPartitioning(clustering, numPartitions)
   }
+
+  /**
+   * Checks if `expressions` match all `clustering` expressions in the same ordering.
+   *
+   * `Partitioning` should call this to check its expressions when `requiredAllClusterKeys`
+   * is set to true.
+   */
+  def areAllClusterKeysMatched(expressions: Seq[Expression]): Boolean = {
+    expressions.length == clustering.length &&
+      expressions.zip(clustering).forall {

Review comment:
       For aggregate or window, I'm not sure whether we have any key-reordering mechanism similar to the one for join. If not, this check could be quite limiting: for instance, if users write `group by x, y, z` while the data distribution is `(y, z, x)`, they would have to rewrite the query to match the distribution's ordering.
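
To make the ordering concern concrete, here is a small standalone Scala sketch. Plain strings stand in for Catalyst `Expression`s, and the order-insensitive variant is purely hypothetical, illustrating the kind of reordering being asked about rather than anything in this PR:

```scala
object ClusterKeyMatchingSketch {
  // Strict, position-wise check, mirroring the shape of areAllClusterKeysMatched above:
  // every grouping key must match the clustering key at the same index.
  def allKeysMatchedInOrder(groupingKeys: Seq[String], clustering: Seq[String]): Boolean =
    groupingKeys.length == clustering.length &&
      groupingKeys.zip(clustering).forall { case (g, c) => g == c }

  // Hypothetical order-insensitive check: same keys, any order.
  def allKeysMatchedIgnoringOrder(groupingKeys: Seq[String], clustering: Seq[String]): Boolean =
    groupingKeys.length == clustering.length &&
      groupingKeys.toSet == clustering.toSet

  def main(args: Array[String]): Unit = {
    val groupBy = Seq("x", "y", "z")        // GROUP BY x, y, z
    val distribution = Seq("y", "z", "x")   // data already distributed by (y, z, x)

    println(allKeysMatchedInOrder(groupBy, distribution))        // false: same keys, different order
    println(allKeysMatchedIgnoringOrder(groupBy, distribution))  // true: key sets are equal
  }
}
```

With the strict check, a distribution over `(y, z, x)` would not be recognized for `group by x, y, z`, so users would either incur an extra shuffle or have to rewrite the query's grouping order.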




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


