Github user wzhfy commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16696#discussion_r104280554
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala ---
    @@ -773,14 +773,20 @@ case class LocalLimit(limitExpr: Expression, child: LogicalPlan) extends UnaryNo
       }
       override def computeStats(conf: CatalystConf): Statistics = {
         val limit = limitExpr.eval().asInstanceOf[Int]
    -    val sizeInBytes = if (limit == 0) {
    +    val childStats = child.stats(conf)
    +    if (limit == 0) {
           // sizeInBytes can't be zero, or sizeInBytes of BinaryNode will also be zero
           // (product of children).
    -      1
    +      Statistics(
    +        sizeInBytes = 1,
    +        rowCount = Some(0),
    +        isBroadcastable = childStats.isBroadcastable)
         } else {
    -      (limit: Long) * output.map(a => a.dataType.defaultSize).sum
     +      // The output row count of LocalLimit should be the sum of row count from each partition, but
     +      // since the partition number is not available here, we just use statistics of the child
     +      // except column stats, because we don't know the distribution after a limit operation
    --- End diff --
    
    How can we make sure the max/min values still hold after the limit? Otherwise they will be very loose bounds on max/min.
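    
    A standalone sketch of the conservative alternative this comment suggests, using simplified stand-in types (these are not the actual Spark `Statistics`/`ColumnStat` classes): cap the row count at the limit and drop per-column stats entirely, since a limit may discard the very rows that held a column's min or max.
    
    ```scala
    // Simplified stand-ins for Spark's statistics types, for illustration only.
    case class ColumnStat(min: Option[Int], max: Option[Int])
    
    case class Statistics(
        sizeInBytes: BigInt,
        rowCount: Option[BigInt] = None,
        colStats: Map[String, ColumnStat] = Map.empty,
        isBroadcastable: Boolean = false)
    
    def limitStats(limit: Int, childStats: Statistics): Statistics = {
      if (limit == 0) {
        // sizeInBytes must stay >= 1 so products over children never collapse to 0.
        Statistics(
          sizeInBytes = 1,
          rowCount = Some(0),
          isBroadcastable = childStats.isBroadcastable)
      } else {
        // Row count can never exceed the limit; column min/max are dropped
        // because the surviving rows' distribution is unknown after the limit.
        childStats.copy(
          rowCount = childStats.rowCount.map(_.min(BigInt(limit))),
          colStats = Map.empty)
      }
    }
    ```
    
    With this scheme a downstream operator never sees stale min/max bounds, at the cost of losing column-level estimates after every limit.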

