Swinky commented on a change in pull request #34062:
URL: https://github.com/apache/spark/pull/34062#discussion_r714259017



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/PartitionPruning.scala
##########
@@ -201,26 +201,28 @@ object PartitionPruning extends Rule[LogicalPlan] with PredicateHelper with Join
   }
 
   /**
-   * Returns whether an expression is likely to be selective
+   * Returns whether an expression is likely to be selective. If the filtering predicate is on join
+   * key then partition filter can be inferred statically in optimization phase, hence return false.
    */
-  private def isLikelySelective(e: Expression): Boolean = e match {
-    case Not(expr) => isLikelySelective(expr)
-    case And(l, r) => isLikelySelective(l) || isLikelySelective(r)
-    case Or(l, r) => isLikelySelective(l) && isLikelySelective(r)
-    case _: StringRegexExpression => true
-    case _: BinaryComparison => true
-    case _: In | _: InSet => true
-    case _: StringPredicate => true
-    case _: MultiLikeBase => true
+  private def isLikelySelective(e: Expression, joinKey: Expression): Boolean = e match {
+    case Not(expr) => isLikelySelective(expr, joinKey)
+    case And(l, r) => isLikelySelective(l, joinKey) || isLikelySelective(r, joinKey)
+    case Or(l, r) => isLikelySelective(l, joinKey) && isLikelySelective(r, joinKey)
+    case expr: StringRegexExpression => true && !expr.references.subsetOf(joinKey.references)
+    case expr: BinaryComparison => true && !expr.references.subsetOf(joinKey.references)
+    case expr: In => true && !expr.references.subsetOf(joinKey.references)
+    case expr: InSet => true && !expr.references.subsetOf(joinKey.references)
+    case expr: StringPredicate => true && !expr.references.subsetOf(joinKey.references)
+    case expr: MultiLikeBase => true && !expr.references.subsetOf(joinKey.references)

Review comment:
       @viirya so is it safe to say that the predicates defined in this method are the supported ones, i.e. the ones handled by org.apache.spark.sql.execution.datasources.DataSourceStrategy#translateLeafNodeFilter? Thanks for pointing that out; I can just make my change for the supported filters.
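For readers skimming the diff, the intent of the `subsetOf` check can be sketched with toy expression types outside of Spark. This is a minimal, self-contained sketch, not Catalyst's real API: `Expr`, `Attr`, `Cmp`, `Lit`, and string-based reference sets are hypothetical stand-ins for Spark's `Expression` and `AttributeSet`. The idea it illustrates is the PR's: a comparison counts as "likely selective" only when it constrains at least one column outside the join key, since a filter purely on the join key can be turned into a static partition filter at optimization time.

```scala
// Toy model of the reference-subset check (NOT Spark's actual classes).
object SelectivitySketch {
  // Hypothetical stand-ins for Catalyst expressions; references are just column names.
  sealed trait Expr { def references: Set[String] }
  final case class Attr(name: String) extends Expr {
    def references: Set[String] = Set(name)
  }
  final case class Lit(value: Int) extends Expr {
    def references: Set[String] = Set.empty
  }
  // A generic binary comparison, e.g. `left > right`.
  final case class Cmp(left: Expr, right: Expr) extends Expr {
    def references: Set[String] = left.references ++ right.references
  }
  final case class And(left: Expr, right: Expr) extends Expr {
    def references: Set[String] = left.references ++ right.references
  }

  // Mirrors the shape of the PR's change: a comparison is likely selective
  // only if it references some column beyond the join key's columns.
  def isLikelySelective(e: Expr, joinKey: Expr): Boolean = e match {
    case And(l, r) => isLikelySelective(l, joinKey) || isLikelySelective(r, joinKey)
    case cmp: Cmp  => !cmp.references.subsetOf(joinKey.references)
    case _         => false
  }

  def main(args: Array[String]): Unit = {
    val joinKey = Attr("k")
    // Filter on the join key itself: pruning is inferable statically, so not selective here.
    assert(!isLikelySelective(Cmp(Attr("k"), Lit(1)), joinKey))
    // Filter on a different column: likely selective.
    assert(isLikelySelective(Cmp(Attr("c"), Lit(1)), joinKey))
    // A conjunction is selective if either side is.
    assert(isLikelySelective(And(Cmp(Attr("k"), Lit(1)), Cmp(Attr("c"), Lit(2))), joinKey))
    println("ok")
  }
}
```

Note the sketch drops the redundant `true &&` from the quoted diff; it does not change the logic.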




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


