GitHub user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/5208#discussion_r28084180
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala ---
@@ -163,6 +178,40 @@ case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)
 }
 
 /**
+ * Represents a partitioning where rows are split up across partitions based on the hash
+ * of `expressions`. All rows where `expressions` evaluate to the same values are guaranteed to be
+ * in the same partition. And rows within the same partition are sorted by the expressions.
+ */
+case class HashSortedPartitioning(expressions: Seq[Expression], numPartitions: Int)
+  extends Expression
+  with Partitioning {
+
+  override def children = expressions
+  override def nullable = false
+  override def dataType = IntegerType
+
+  private[this] lazy val clusteringSet = expressions.toSet
+
+  override def satisfies(required: Distribution): Boolean = required match {
+    case UnspecifiedDistribution => true
+    case ClusteredOrderedDistribution(requiredClustering) =>
+      clusteringSet.subsetOf(requiredClustering.toSet)
--- End diff --
For example, if a `SparkPlan`'s `outputPartitioning` is a `HashSortedPartitioning` of `(c1)` and the required distribution is a `ClusteredOrderedDistribution` of `(c3, c2, c1)`, we do not need to add an `Exchange`. However, we still need to sort every partition locally.
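To make that concrete, here is a minimal, self-contained sketch of the subset check in the diff above. The simplified `Distribution` and `HashSortedPartitioning` stand-ins (with plain column names in place of Catalyst `Expression`s, and a made-up `numPartitions` value) are illustrative only, not the actual Catalyst classes:

```scala
// Hypothetical stand-ins, not the real Catalyst classes, to show why a
// HashSortedPartitioning on (c1) satisfies a ClusteredOrderedDistribution on (c3, c2, c1).
object SatisfiesSketch {
  // Plain column names stand in for Catalyst Expressions.
  type Expr = String

  sealed trait Distribution
  case object UnspecifiedDistribution extends Distribution
  case class ClusteredOrderedDistribution(clustering: Seq[Expr]) extends Distribution

  case class HashSortedPartitioning(expressions: Seq[Expr], numPartitions: Int) {
    private val clusteringSet: Set[Expr] = expressions.toSet

    def satisfies(required: Distribution): Boolean = required match {
      case UnspecifiedDistribution => true
      case ClusteredOrderedDistribution(requiredClustering) =>
        // Every partitioning expression appears in the required clustering, so rows
        // that must be co-located already are; no shuffle (Exchange) is needed.
        clusteringSet.subsetOf(requiredClustering.toSet)
    }
  }

  def main(args: Array[String]): Unit = {
    val partitioning = HashSortedPartitioning(Seq("c1"), numPartitions = 200)
    val required = ClusteredOrderedDistribution(Seq("c3", "c2", "c1"))

    // Prints true: Set(c1) is a subset of Set(c3, c2, c1), so no Exchange is added,
    // but each partition still needs a local sort on (c3, c2, c1).
    println(partitioning.satisfies(required))
  }
}
```

Because the check only looks at set containment, it says nothing about ordering, which is why the partition-local sort on `(c3, c2, c1)` is still required even though no `Exchange` is added.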