szehon-ho commented on code in PR #45267:
URL: https://github.com/apache/spark/pull/45267#discussion_r1506339765
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala:
##########
@@ -635,6 +636,22 @@ trait ShuffleSpec {
*/
def createPartitioning(clustering: Seq[Expression]): Partitioning =
throw SparkUnsupportedOperationException()
+
+ /**
+ * Return a set of [[Reducer]] for the partition expressions of this shuffle
spec,
+ * on the partition expressions of another shuffle spec.
+ * <p>
+ * A [[Reducer]] exists for a partition expression function of this shuffle
spec if it is
+ * 'reducible' on the corresponding partition expression function of the
other shuffle spec.
+ * <p>
+ * If a value is returned, there must be one Option[[Reducer]] per partition
expression.
+ * A None value in the set indicates that the particular partition
expression is not reducible
+ * on the corresponding expression on the other shuffle spec.
+ * <p>
+ * Returning none also indicates that none of the partition expressions can
be reduced on the
+ * corresponding expression on the other shuffle spec.
+ */
+ def reducers(spec: ShuffleSpec): Option[Seq[Option[Reducer[_]]]] = None
Review Comment:
Hi @advancedxy, yeah, a generic mechanism definitely increases the scope. But there
are actually many such functions: the main example is that even Iceberg's bucket
function is not the same as Spark's and would need to implement this somehow, and so
would anything the user registers in the function catalog. Geo bucketing functions,
for instance.
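To make the "reducible" idea above concrete, here is a toy, self-contained sketch (plain Java, not Spark's actual `Reducer` API; all names are hypothetical) of why a bucket function with more buckets can be reduced onto one with fewer buckets when the counts divide evenly: `(h mod n) mod m == h mod m` whenever `m` divides `n`.

```java
import java.util.Optional;

public class ReducerSketch {
    // Hypothetical stand-in for the Reducer concept: maps a partition value
    // from this spec's space onto the other spec's space.
    interface Reducer<I, O> {
        O reduce(I value);
    }

    // bucket(n, x) is 'reducible' onto bucket(m, x) when m divides n:
    // the m-bucket is just the n-bucket value mod m. Otherwise there is
    // no such mapping, which corresponds to returning None in the trait.
    static Optional<Reducer<Integer, Integer>> bucketReducer(int thisBuckets, int otherBuckets) {
        if (otherBuckets > 0 && thisBuckets % otherBuckets == 0) {
            return Optional.of(v -> v % otherBuckets);
        }
        return Optional.empty(); // e.g. bucket(8) vs bucket(3): not reducible
    }

    public static void main(String[] args) {
        // bucket(8) value 6 lands in bucket(4) value 6 % 4 == 2
        Reducer<Integer, Integer> r = bucketReducer(8, 4).get();
        System.out.println(r.reduce(6));                      // 2
        System.out.println(bucketReducer(8, 3).isPresent());  // false
    }
}
```

A real implementation (Iceberg's bucket transform, or a user-registered geo bucketing function) would carry its own compatibility rule in place of the simple divisibility check here.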
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]