sunchao commented on a change in pull request #35657:
URL: https://github.com/apache/spark/pull/35657#discussion_r829332903



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala
##########
@@ -305,6 +306,63 @@ case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)
     newChildren: IndexedSeq[Expression]): HashPartitioning = copy(expressions = newChildren)
 }
 
+/**
+ * Represents a partitioning where rows are split across partitions based on transforms defined
+ * by `expressions`. `partitionValuesOpt`, if defined, should contain the value of the partition
+ * key(s), after evaluation by the transforms in `expressions`, for each input partition, in
+ * ascending order. In addition, its length must equal the number of input partitions (and thus
+ * is a 1-1 mapping), and each row in `partitionValuesOpt` must be unique.
+ *
+ * For example, if `expressions` is `[years(ts_col)]`, then a valid value of `partitionValuesOpt`
+ * is `[0, 1, 2]`, which represents 3 input partitions with distinct partition values. All rows
+ * in each partition have the same value for column `ts_col` (which is of timestamp type), after
+ * being applied by the `years` transform.
+ *
+ * On the other hand, `[0, 0, 1]` is not a valid value for `partitionValuesOpt` since `0` appears
+ * twice.
+ *
+ * @param expressions partition expressions for the partitioning.
+ * @param numPartitions the number of partitions
+ * @param partitionValuesOpt if set, the values for the cluster keys of the distribution, must be
+ *                           in ascending order.
+ */
+case class KeyGroupedPartitioning(
+    expressions: Seq[Expression],
+    numPartitions: Int,
+    partitionValuesOpt: Option[Seq[InternalRow]] = None) extends Partitioning {
+
+  override def satisfies0(required: Distribution): Boolean = {
+    super.satisfies0(required) || {
+      required match {
+        case c @ ClusteredDistribution(requiredClustering, requireAllClusterKeys, _) =>
+          if (requireAllClusterKeys) {
+            // Checks whether this partitioning is partitioned on exactly the same clustering
+            // keys as `ClusteredDistribution`.
+            c.areAllClusterKeysMatched(expressions)
+          } else {
+            // We'll need to find leaf attributes from the partition expressions first.
+            val attributes = expressions.flatMap(_.collectLeaves())
+            attributes.forall(x => requiredClustering.exists(_.semanticEquals(x)))
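To make the doc comment's rules on `partitionValuesOpt` concrete, here is a hypothetical, Spark-free sketch in plain Scala, with `Seq[Int]` rows standing in for `InternalRow`: a candidate is valid only if its length equals `numPartitions` and every row is unique.

```scala
// Hypothetical sketch: plain Scala, with Seq[Int] rows standing in for
// Spark's InternalRow. Not the actual Spark implementation.
object PartitionValuesCheck {
  // A candidate `partitionValuesOpt` is valid iff it has exactly one row per
  // input partition (a 1-1 mapping) and no row is duplicated.
  def isValid(partitionValues: Seq[Seq[Int]], numPartitions: Int): Boolean =
    partitionValues.length == numPartitions &&
      partitionValues.distinct.length == partitionValues.length
}
```

Under this sketch, `[0, 1, 2]` for 3 partitions passes both checks, while `[0, 0, 1]` fails the uniqueness check, matching the examples in the doc comment.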

Review comment:
       For simplicity, this PR only supports transforms with a single argument, so the check here is very similar to `HashPartitioning`. I plan to work on multi-argument support in a separate PR.
   
   > I think the theory is, as long as the transform function is deterministic, if two rows have the same value of [a, b, c], the calculated key values [f1(a, b), f2(b, c)] are also the same.
   
   Yeah, I agree with this. It may be more complicated if there are duplicated keys in the distribution and/or the partitioning, though. We also need to think about how to implement `EnsureRequirements.reorderJoinPredicates` for this case.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]