mgaido91 commented on a change in pull request #22957: [SPARK-25951][SQL]
Ignore aliases for distributions and orderings
URL: https://github.com/apache/spark/pull/22957#discussion_r255425400
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala
##########
@@ -284,6 +298,19 @@ case class RangePartitioning(ordering: Seq[SortOrder], numPartitions: Int)
      }
    }
  }
+
+  override private[spark] def pruneInvalidAttribute(invalidAttr: Attribute): Partitioning = {
+    if (this.references.contains(invalidAttr)) {
+      val validExprs = this.children.takeWhile(!_.references.contains(invalidAttr))
+      if (validExprs.isEmpty) {
+        UnknownPartitioning(numPartitions)
+      } else {
+        RangePartitioning(validExprs, numPartitions)
Review comment:
Mmmh, I am not sure what you mean by the classdoc of these two classes; I don't see anything about this there. In any case, I see the implementation does match what you describe, but I believe it is wrong (or at least suboptimal, if you prefer). If the data is partitioned by sorting on `a.ASC, b.ASC`, it is certainly also partitioned by sorting on `a.ASC`, so I think the `forall` should be an `exists`. There is also a (very minor) bug in the current implementation; try running this test (it fails):
```scala
test("partitioning test") {
  val attr1 = AttributeReference("attr1", IntegerType)()
  val attr2 = AttributeReference("attr2", IntegerType)()
  val partitioning = RangePartitioning(Seq.empty, 10)
  val requiredDistribution = ClusteredDistribution(Seq(attr2, attr1), Some(10))
  assert(!partitioning.satisfies(requiredDistribution))
}
```
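To spell out why the assertion fails: `forall` over an empty collection is vacuously true, so a `RangePartitioning` with an empty `ordering` trivially "satisfies" any `ClusteredDistribution` under the current check. A one-line REPL illustration:

```scala
// `forall` on an empty collection always returns true (vacuous truth),
// which is exactly why the empty-ordering partitioning above passes.
Seq.empty[Int].forall(x => Seq(1, 2).contains(x))  // true
```

And to sketch the `forall` -> `exists` change I have in mind (this is only an illustration of the semantics, not the actual `satisfies` code; `requiredClustering` here stands for the expressions of the `ClusteredDistribution`):

```scala
// Hypothetical sketch: it is enough that *some* sort expression appears
// among the required clustering expressions, rather than all of them.
ordering.map(_.child).exists { sortExpr =>
  requiredClustering.exists(_.semanticEquals(sortExpr))
}
// As a side effect, `exists` on an empty collection is false, so the
// failing test above would pass as well.
```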