Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/20844#discussion_r176307376
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala ---
@@ -348,6 +348,13 @@ case class RangeExec(range: org.apache.spark.sql.catalyst.plans.logical.Range)
       override lazy val metrics = Map(
         "numOutputRows" -> SQLMetrics.createMetric(sparkContext, "number of output rows"))

    +  /** Specifies how data is partitioned across different nodes in the cluster. */
    +  override def outputPartitioning: Partitioning = if (numSlices == 1 && numElements != 0) {
--- End diff ---
why `numElements != 0`?
---
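For context, the diff above truncates the method body. A hedged sketch of what such an override might look like, assuming the Spark catalyst partitioning types (`SinglePartition`, `UnknownPartitioning`) and the `numSlices`/`numElements` fields of `RangeExec`; the fallback branch here is hypothetical, not necessarily what the PR actually does:

    import org.apache.spark.sql.catalyst.plans.physical.{
      Partitioning, SinglePartition, UnknownPartitioning}

    // Sketch only -- inside RangeExec, not a standalone compilable unit.
    /** Specifies how data is partitioned across different nodes in the cluster. */
    override def outputPartitioning: Partitioning =
      if (numSlices == 1 && numElements != 0) {
        // A non-empty range produced in a single slice is exactly one partition.
        SinglePartition
      } else {
        // Otherwise report numSlices partitions with no known layout.
        UnknownPartitioning(numSlices)
      }

The reviewer's question targets the `numElements != 0` guard: with one slice, the output is a single partition whether or not it is empty, so the extra condition needs justification.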