Github user mshtelma commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21052#discussion_r181418993
  
    --- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
 ---
    @@ -395,27 +395,28 @@ case class FilterEstimation(plan: Filter) extends 
Logging {
         // use [min, max] to filter the original hSet
         dataType match {
           case _: NumericType | BooleanType | DateType | TimestampType =>
    -        val statsInterval =
    -          ValueInterval(colStat.min, colStat.max, 
dataType).asInstanceOf[NumericValueInterval]
    -        val validQuerySet = hSet.filter { v =>
    -          v != null && statsInterval.contains(Literal(v, dataType))
    -        }
    +        if (colStat.min.isDefined && colStat.max.isDefined) {
    --- End diff --
    
    Yes, I have removed the bigger `if` and implemented all three checks with 
one small `if`.
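
    The guarded filtering the diff introduces can be sketched roughly as 
follows. This is a simplified, hypothetical stand-in (not Spark's actual 
`FilterEstimation` code): `ColStat` and `filterHSet` are invented names, and 
the interval check is reduced to plain `Double` bounds instead of 
`ValueInterval`/`Literal`.

    ```scala
    object IntervalFilterSketch {
      // Hypothetical simplified stand-in for Spark's ColumnStat,
      // where min/max may be absent (None) when no stats were collected.
      case class ColStat(min: Option[Double], max: Option[Double])

      // Prune hSet down to values inside [min, max], but only when both
      // bounds are defined -- mirroring the small `if` in the diff.
      def filterHSet(hSet: Set[Any], colStat: ColStat): Set[Any] =
        (colStat.min, colStat.max) match {
          case (Some(lo), Some(hi)) =>
            hSet.filter {
              case v: Double => v >= lo && v <= hi
              case _         => false // nulls / non-numeric values are dropped
            }
          case _ =>
            hSet // no stats available: cannot prune, keep the original set
        }
    }
    ```

    With `min = 2.0` and `max = 8.0`, a set like `Set(1.0, 5.0, 10.0)` would 
be pruned to `Set(5.0)`, while undefined bounds leave the set untouched.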


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
