sunchao commented on a change in pull request #35574:
URL: https://github.com/apache/spark/pull/35574#discussion_r812563354
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
##########
@@ -56,7 +57,23 @@ case class EnsureRequirements(
     // Ensure that the operator's children satisfy their output distribution requirements.
     var children = originalChildren.zip(requiredChildDistributions).map {
       case (child, distribution) if child.outputPartitioning.satisfies(distribution) =>
-        child
+        (child.outputPartitioning, distribution) match {
+          case (p: HashPartitioning, d: ClusteredDistribution) =>
+            if (conf.getConf(SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_SOLE_PARTITION) &&
+                requiredChildDistributions.size == 1 && !p.isPartitionedOnFullKeys(d)) {
+              // Add an extra shuffle for `ClusteredDistribution` even though its child
Review comment:
> If we want to achieve such plans, I think we can (should?) still
capture such things in requirements and satisfies, instead of baking
complexities into EnsureRequirements or yet-another physical rule. E.g,
Yes agreed. It's better to avoid case analysis in `EnsureRequirements` or
some other rules.
On the augmented `ClusteredDistribution`, I'm not sure whether we'll need
`containing` and `disallowedCombinations` for broader scenarios, or whether a
simple knob like `spark.sql.requireAllClusterKeysForHashPartition` is
sufficient for the majority of use cases. But as you demonstrated, I think we
can just keep `ClusteredDistribution`, evolve it, and make it more expressive
if there are strong requirements in the future.
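To illustrate the direction discussed above (a rough sketch only; the class
and field names here are hypothetical stand-ins, not Spark's actual
internals), the "require all cluster keys" knob can live inside the
`satisfies` check itself rather than as case analysis in `EnsureRequirements`:

```scala
// Hypothetical models of the two concepts under discussion. In Spark these
// are `org.apache.spark.sql.catalyst.plans.physical` classes; here they are
// simplified to plain strings so the satisfies() logic stands alone.
case class ClusteredDistribution(
    clustering: Seq[String],
    requireAllClusterKeys: Boolean = false)

case class HashPartitioning(expressions: Seq[String]) {
  def satisfies(d: ClusteredDistribution): Boolean =
    if (d.requireAllClusterKeys) {
      // Strict mode: partition keys must be exactly the full clustering key
      // set, so e.g. hash(a) does NOT satisfy cluster-by(a, b).
      expressions == d.clustering
    } else {
      // Default mode: partitioning on any non-empty subset of the
      // clustering keys is enough.
      expressions.nonEmpty && expressions.forall(d.clustering.contains)
    }
}

object Demo extends App {
  val dist = ClusteredDistribution(Seq("a", "b"))
  println(HashPartitioning(Seq("a")).satisfies(dist))           // true: subset OK
  println(HashPartitioning(Seq("a"))
    .satisfies(dist.copy(requireAllClusterKeys = true)))        // false: strict
  println(HashPartitioning(Seq("a", "b"))
    .satisfies(dist.copy(requireAllClusterKeys = true)))        // true: full keys
}
```

With the requirement expressed on the distribution, `EnsureRequirements` only
ever asks `child.outputPartitioning.satisfies(distribution)` and needs no
pattern match on partitioning/distribution pairs.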
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]