Github user wzhfy commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19783#discussion_r155690722
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala ---
    @@ -332,8 +332,41 @@ case class FilterEstimation(plan: Filter) extends Logging {
             colStatsMap.update(attr, newStats)
           }
     
    -      Some(1.0 / BigDecimal(ndv))
    -    } else {
    +      if (colStat.histogram.isEmpty) {
    +        // returns 1/ndv if there is no histogram
    +        Some(1.0 / BigDecimal(ndv))
    +      } else {
    +        // We compute filter selectivity using Histogram information.
    +        // Here we traverse histogram bins to locate the range of bins the literal value falls
    +        // into. For a skewed distribution, a literal value can occupy multiple bins.
    +        val hgmBins = colStat.histogram.get.bins
    +        val datum = EstimationUtils.toDecimal(literal.value, literal.dataType).toDouble
    --- End diff --
    
    Yes, I'll refactor this part.
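    
    For context, a rough sketch of what the extracted helper could look like (just a
    sketch with hypothetical names, assuming equi-height bins with inclusive [lo, hi]
    bounds; not the final code):
    
        // Hypothetical helper, not part of FilterEstimation: find the inclusive
        // index range of histogram bins whose [lo, hi] interval contains `value`.
        case class Bin(lo: Double, hi: Double, ndv: Long)
    
        def binRangeHoldingValue(bins: Array[Bin], value: Double): Option[(Int, Int)] = {
          val first = bins.indexWhere(b => b.lo <= value && value <= b.hi)
          if (first < 0) {
            None // value falls outside the histogram's overall range
          } else {
            // For a skewed distribution the same value can be the bound of several
            // consecutive bins, so also look for the last bin containing it.
            val last = bins.lastIndexWhere(b => b.lo <= value && value <= b.hi)
            Some((first, last))
          }
        }
    
    Since equi-height bins hold roughly the same number of rows, the width of the
    returned range divided by the total number of bins gives a rough upper bound on
    the selectivity of an equality predicate on that value.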


---
