GitHub user ron8hu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17148#discussion_r104557770
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala ---
    @@ -90,32 +95,43 @@ case class FilterEstimation(plan: Filter, catalystConf: CatalystConf) extends Lo
       def calculateFilterSelectivity(condition: Expression, update: Boolean = true): Option[Double] = {
         condition match {
           case And(cond1, cond2) =>
    -        // For ease of debugging, we compute percent1 and percent2 in 2 statements.
    -        val percent1 = calculateFilterSelectivity(cond1, update)
    -        val percent2 = calculateFilterSelectivity(cond2, update)
    -        (percent1, percent2) match {
    -          case (Some(p1), Some(p2)) => Some(p1 * p2)
    -          case (Some(p1), None) => Some(p1)
    -          case (None, Some(p2)) => Some(p2)
    -          case (None, None) => None
    -        }
    +        val percent1 = calculateFilterSelectivity(cond1, update).getOrElse(1.0)
    +        val percent2 = calculateFilterSelectivity(cond2, update).getOrElse(1.0)
    +        Some(percent1 * percent2)
     
           case Or(cond1, cond2) =>
    -        // For ease of debugging, we compute percent1 and percent2 in 2 statements.
    -        val percent1 = calculateFilterSelectivity(cond1, update = false)
    -        val percent2 = calculateFilterSelectivity(cond2, update = false)
    -        (percent1, percent2) match {
    -          case (Some(p1), Some(p2)) => Some(math.min(1.0, p1 + p2 - (p1 * p2)))
    -          case (Some(p1), None) => Some(1.0)
    -          case (None, Some(p2)) => Some(1.0)
    -          case (None, None) => None
    +        val percent1 = calculateFilterSelectivity(cond1, update = false).getOrElse(1.0)
    +        val percent2 = calculateFilterSelectivity(cond2, update = false).getOrElse(1.0)
    +        Some(percent1 + percent2 - (percent1 * percent2))
    +
    +      // For AND and OR conditions, we estimate conservatively if one of the two
    +      // components is not supported, e.g. suppose c1 is not supported,
    +      // then p(And(c1, c2)) = p(c2), and p(Or(c1, c2)) = 1.0.
    +      // But once they are wrapped in a NOT condition, then after 1 - p this
    +      // becomes an under-estimate. So in these cases, we treat them as unsupported.
    +      case Not(And(cond1, cond2)) =>
    --- End diff --
    
    The current code is fine. If we just called calculateSingleCondition for
    "case Not(And(cond1, cond2))", it would be too restrictive. The current code
    computes a selectivity only when we can get a selectivity for both conditions;
    if we cannot get a selectivity for one or both of them, we simply return None.
    I think it is a clean solution.
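
    For illustration, here is a minimal, self-contained sketch of the combination
    rules under discussion. The Expr hierarchy, the Leaf node, and the name
    selectivityOf are hypothetical stand-ins (Catalyst's Expression tree and the
    update parameter are omitted); only the combination logic mirrors the diff
    above. Note how Not(And(...)) and Not(Or(...)) require a selectivity from both
    sub-conditions, so the final 1 - p can never turn the conservative
    getOrElse(1.0) fallback into an under-estimate.

        // Hypothetical stand-in for Catalyst's Expression tree; only the
        // selectivity combination rules below mirror the patch.
        sealed trait Expr
        case class And(left: Expr, right: Expr) extends Expr
        case class Or(left: Expr, right: Expr) extends Expr
        case class Not(child: Expr) extends Expr
        // A leaf whose selectivity is either known or unsupported (None).
        case class Leaf(selectivity: Option[Double]) extends Expr

        def selectivityOf(cond: Expr): Option[Double] = cond match {
          // Not(And(...)) / Not(Or(...)) must be matched before the
          // generic Not(...) case.
          case Not(And(c1, c2)) =>
            // Inside NOT, the conservative fallback would flip into an
            // under-estimate after 1 - p, so both sides must be supported.
            for (p1 <- selectivityOf(c1); p2 <- selectivityOf(c2))
              yield 1.0 - p1 * p2
          case Not(Or(c1, c2)) =>
            for (p1 <- selectivityOf(c1); p2 <- selectivityOf(c2))
              yield 1.0 - (p1 + p2 - p1 * p2)
          case And(c1, c2) =>
            // Conservative: an unsupported side filters nothing (factor 1.0).
            val p1 = selectivityOf(c1).getOrElse(1.0)
            val p2 = selectivityOf(c2).getOrElse(1.0)
            Some(p1 * p2)
          case Or(c1, c2) =>
            val p1 = selectivityOf(c1).getOrElse(1.0)
            val p2 = selectivityOf(c2).getOrElse(1.0)
            Some(p1 + p2 - p1 * p2)
          case Not(c) => selectivityOf(c).map(1.0 - _)
          case Leaf(sel) => sel
        }

        // Leaf(None) marks an unsupported condition:
        selectivityOf(And(Leaf(None), Leaf(Some(0.3))))      // Some(0.3)
        selectivityOf(Not(And(Leaf(None), Leaf(Some(0.3))))) // None, not Some(0.7)

    The last two calls show the asymmetry being discussed: under AND the
    unsupported side is simply dropped, but under NOT the whole estimate is
    abandoned rather than risk reporting 0.7 when the true selectivity could be
    anywhere up to 1.0.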

