cloud-fan commented on code in PR #39170:
URL: https://github.com/apache/spark/pull/39170#discussion_r1162843596


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/InjectRuntimeFilter.scala:
##########
@@ -146,7 +151,25 @@ object InjectRuntimeFilter extends Rule[LogicalPlan] with PredicateHelper with J
           predicateReference ++ condition.references,
           hasHitFilter = true,
           hasHitSelectiveFilter = hasHitSelectiveFilter || isLikelySelective(condition))
-      case _: LeafNode => hasHitSelectiveFilter
+      case ExtractEquiJoinKeys(joinType, _, _, _, _, left, right, hint) =>
+        // Runtime filters use one side of the [[Join]] to build a set of join key values and prune
+        // the other side of the [[Join]]. It's also OK to use a superset of the join key values to
+        // do the pruning. For inner [[Join]]s, one side of the [[Join]] always produces a superset
+        // of the join key values.
+        if (isLeftSideSuperset(joinType, left, filterCreationSideExp)) {
+          !hintToBroadcastLeft(hint) && !canBroadcastBySize(left, conf) &&

Review Comment:
   Why do we check whether it's broadcastable? We already have a very conservative definition of `a selective filter over scan`, which is sufficient to decide whether or not to build the runtime filter.
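
   To illustrate the idea the diff comment describes (not Spark's actual API): a runtime filter collects the join key values from the filter-creation side and prunes the other side by membership. Using a superset of the build-side keys is also safe, because membership in a superset can only let extra rows through, never drop matching rows. A minimal toy sketch, with all names hypothetical:

   ```scala
   object RuntimeFilterSketch {
     // Collect the distinct join key values from the build (filter-creation) side.
     def buildFilter(buildSide: Seq[(Int, String)]): Set[Int] =
       buildSide.map(_._1).toSet

     // Prune the application side: keep only rows whose key appears in the
     // filter. A superset of the build-side keys yields a correct (if less
     // effective) filter, since it never removes a row that would join.
     def prune(applySide: Seq[(Int, String)], keys: Set[Int]): Seq[(Int, String)] =
       applySide.filter { case (k, _) => keys.contains(k) }
   }
   ```

   For example, `RuntimeFilterSketch.prune(Seq((1, "x"), (3, "y")), RuntimeFilterSketch.buildFilter(Seq((1, "a"), (2, "b"))))` keeps only the row with key `1`, since key `3` never appears on the build side.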



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
