Github user chenghao-intel commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8652#discussion_r42446830
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
    @@ -268,6 +268,27 @@ private[sql] abstract class SparkStrategies extends QueryPlanner[SparkPlan] {
     
       object CartesianProduct extends Strategy {
         def apply(plan: LogicalPlan): Seq[SparkPlan] = plan match {
    +      // Unlike the equi-join, BroadcastNestedLoopJoin doesn't support a
    +      // condition for the cartesian join: a record may satisfy the
    +      // condition but exist in another partition of the large table, so
    +      // we may not be able to eliminate the duplicates.
    +      case logical.Join(
    +        CanBroadcast(left), right, joinType @ (FullOuter | LeftOuter | RightOuter), None) =>
    +        execution.joins.BroadcastNestedLoopJoin(
    +          planLater(left), planLater(right), joins.BuildLeft, joinType, None) :: Nil
    --- End diff --
    
    Yes, I noticed that too; however, without these changes, the cases I added 
would be planned as the `CartesianProduct` operator instead of 
`BroadcastNestedLoopJoin`, because the `BroadcastNestedLoopJoin` strategy is 
applied after `CartesianProduct`.
    
    Besides, explicitly spelling out the rules for this optimization will 
probably help people understand the logic behind it.
    
    PS: The rule `BroadcastNestedLoopJoin` has to be the last gate, as it's 
supposed to handle all kinds of joins.
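
    To make the ordering argument concrete, here is a minimal, hedged sketch 
(not Spark's actual planner code; `Plan`, `Join`, and the strategy names are 
simplified stand-ins): strategies are tried in declaration order, so a more 
specific rule placed before the catch-all wins, which is exactly why the new 
case must precede `CartesianProduct`.

    ```scala
    // Simplified stand-ins for Spark's planner types (illustrative only).
    sealed trait Plan
    case class Join(leftBroadcastable: Boolean, condition: Option[String]) extends Plan

    // A strategy either produces a physical operator name or declines.
    type Strategy = Plan => Option[String]

    // Earlier rule: handles broadcastable, condition-free joins.
    val broadcastNestedLoop: Strategy = {
      case Join(true, None) => Some("BroadcastNestedLoopJoin")
      case _                => None
    }

    // Later rule: the catch-all that matches any join.
    val cartesianProduct: Strategy = {
      case Join(_, _) => Some("CartesianProduct")
    }

    // Apply strategies in order; the first one that matches decides the plan.
    def plan(p: Plan, strategies: Seq[Strategy]): String =
      strategies.iterator.flatMap(s => s(p).iterator).next()

    // A broadcastable, condition-free join is planned as
    // BroadcastNestedLoopJoin only because that rule runs first;
    // reversing the order would yield CartesianProduct.
    val result = plan(Join(leftBroadcastable = true, condition = None),
                      Seq(broadcastNestedLoop, cartesianProduct))
    ```

    The same mechanism explains the PS: a catch-all rule like this must come 
last, or it would shadow every more specific rule declared after it.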

