yushesp commented on code in PR #53375:
URL: https://github.com/apache/spark/pull/53375#discussion_r2596674671
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala:
##########
@@ -676,6 +676,29 @@ case class ShufflePartitionIdPassThrough(
copy(expr = newChildren.head.asInstanceOf[DirectShufflePartitionID])
}
+/**
+ * Represents a partitioning where rows are distributed using a custom [[Partitioner]].
+ *
+ * The key extraction function is applied to deserialize each row and extract a key,
+ * which is then passed to the partitioner to determine the target partition.
+ */
+case class CustomFunctionPartitioning(
Review Comment:
Thanks for flagging this; I wasn't aware of #52153 when I put this together. I've just
read through it.
It looks like repartitionById covers cases where the partitioning logic can be
expressed as a column expression, which handles many use cases cleanly.
The gap I had in mind is reusing existing Partitioner implementations from RDD
codebases, or cases where the logic is complex enough that encapsulating it in a
testable class is preferable to inline expressions. That said, I can see the
argument that those cases are niche enough that repartitionById is sufficient.
Curious whether there’s appetite for supporting both patterns or if the
consensus is that this isn’t needed. Happy to close if so.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]