Github user ConeyLiu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20844#discussion_r176327636
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala
 ---
    @@ -348,6 +348,13 @@ case class RangeExec(range: 
org.apache.spark.sql.catalyst.plans.logical.Range)
       override lazy val metrics = Map(
         "numOutputRows" -> SQLMetrics.createMetric(sparkContext, "number of 
output rows"))
     
    +  /** Specifies how data is partitioned across different nodes in the 
cluster. */
    +  override def outputPartitioning: Partitioning = if (numSlices == 1 && 
numElements != 0) {
    --- End diff ---
    
    This is related to the [UT error](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/88474/testReport/org.apache.spark.sql/DataFrameRangeSuite/SPARK_7150_range_api/). `spark.range(-10, -9, -20, 1).count()` failed when `codegen` was set to true and `RangeExec.outputPartitioning` was set to `SinglePartition`. I tried to find the root cause, but failed.
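
    To illustrate why the `numElements != 0` guard in the quoted diff matters, here is a hedged, Spark-free sketch (the object name `RangeCount` and the helper are my own, not Spark source) of how a range's element count can be computed. `range(-10, -9, -20)` has a negative step with `end > start`, so the range is empty:

    ```scala
    // Hypothetical sketch, not Spark's actual implementation:
    // count the elements of a half-open range [start, end) with the given step.
    object RangeCount {
      def numElements(start: Long, end: Long, step: Long): Long = {
        val diff = end - start
        // A zero step, or a step pointing away from `end`, yields an empty range.
        if (step == 0 || (step > 0 && diff <= 0) || (step < 0 && diff >= 0)) 0L
        else diff / step + (if (diff % step != 0) 1L else 0L)
      }

      def main(args: Array[String]): Unit = {
        // Mirrors the failing test's arguments: start = -10, end = -9, step = -20.
        println(RangeCount.numElements(-10, -9, -20)) // prints 0
      }
    }
    ```

    Under this reading, the failing case is exactly an empty range (`numElements == 0`), which is why the diff excludes it from the `SinglePartition` fast path.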



---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
