Github user adrian-ionescu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19828#discussion_r153409247
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala ---
@@ -838,6 +839,27 @@ case class RepartitionByExpression(
   require(numPartitions > 0, s"Number of partitions ($numPartitions) must be positive.")
+  require(partitionExpressions.nonEmpty, "At least one partition-by expression must be specified.")
+
+  val partitioning: Partitioning = {
+    val (sortOrder, nonSortOrder) = partitionExpressions.partition(_.isInstanceOf[SortOrder])
--- End diff ---
It's going to follow the `HashPartitioning` path and eventually lead to a "Cannot evaluate expression" exception, just as it presently does if you try running `df.repartition($"col".asc + 1)` or `df.sort($"col".asc + 1)`.
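For context, here is a minimal sketch of that failure mode (assuming a local SparkSession; the session and DataFrame names are illustrative and not part of this patch):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("col", "name")

// `$"col".asc + 1` wraps the SortOrder inside an Add expression, so the
// resulting expression is *not* itself an instance of SortOrder. The
// partition(_.isInstanceOf[SortOrder]) split above therefore routes it
// down the HashPartitioning path, and execution fails because a SortOrder
// cannot be evaluated.
df.repartition($"col".asc + 1).collect() // throws "Cannot evaluate expression: ..."
df.sort($"col".asc + 1).collect()        // fails the same way today
```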
---