Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r214962083
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
@@ -1208,9 +1208,26 @@ object PushPredicateThroughJoin extends Rule[LogicalPlan] with PredicateHelper {
          reduceLeftOption(And).map(Filter(_, left)).getOrElse(left)
        val newRight = rightJoinConditions.
          reduceLeftOption(And).map(Filter(_, right)).getOrElse(right)
-       val newJoinCond = commonJoinCondition.reduceLeftOption(And)
-
-       Join(newLeft, newRight, joinType, newJoinCond)
+       val (newJoinConditions, others) =
+         commonJoinCondition.partition(canEvaluateWithinJoin)
+       val newJoinCond = newJoinConditions.reduceLeftOption(And)
+       // If a condition expression is unevaluable, it will be removed from
+       // the new join conditions; if all conditions are unevaluable, we should
+       // change the join type to CrossJoin.
+       val newJoinType =
+         if (commonJoinCondition.nonEmpty && newJoinCond.isEmpty) {
+           logWarning(s"The whole commonJoinCondition:$commonJoinCondition of the join " +
+             s"plan:\n $j is unevaluable, it will be ignored and the join plan will be " +
--- End diff ---
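For anyone skimming the hunk, here is a minimal, self-contained sketch of the partition-and-reduce step. `Cond` and `evaluableInJoin` are made-up stand-ins for Catalyst's `Expression` and `PredicateHelper.canEvaluateWithinJoin`, just to show the shape of the logic, not the real APIs:

```scala
// Hypothetical stand-in for a Catalyst Expression in a join condition.
case class Cond(sql: String, evaluableInJoin: Boolean)

val commonJoinCondition = Seq(
  Cond("a = b", evaluableInJoin = true),
  Cond("pyUdf(a, b)", evaluableInJoin = false)) // e.g. a Python UDF

// Split the conditions into those the join can evaluate and the rest.
val (newJoinConditions, others) =
  commonJoinCondition.partition(_.evaluableInJoin)

// And-reduce whatever can stay in the join; None means nothing survived.
val newJoinCond: Option[Cond] =
  newJoinConditions.reduceLeftOption((l, r) =>
    Cond(s"(${l.sql} AND ${r.sql})", evaluableInJoin = true))

// The PR's fallback: a non-empty original condition that reduces to None
// forces a cross join, with `others` re-applied as a Filter on top.
val mustCrossJoin = commonJoinCondition.nonEmpty && newJoinCond.isEmpty
// mustCrossJoin == false here, because "a = b" survived the partition.
```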
@mgaido91
> This is, indeed, arguable. I think that letting the user choose is a good
> idea. If the user runs the query and gets an AnalysisException because he/she
> is trying to perform a cartesian product, he/she can decide: OK, I am doing
> the wrong thing, let's change it. Or he/she can say: well, one of my two
> tables involved contains 10 rows, not a big deal, I want to perform it
> nonetheless; let's set spark.sql.crossJoin.enabled=true and run it.
Sounds reasonable.
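For reference, a minimal sketch of that opt-in flow (table names and sizes are made up; this assumes Spark 2.x behavior, where an implicit cartesian product fails analysis unless the flag is set):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("cross-join-opt-in")
  .getOrCreate()

val small = spark.range(10).toDF("id")    // tiny table: "not a big deal"
val big = spark.range(1000).toDF("id2")

// With the default spark.sql.crossJoin.enabled=false, Spark 2.x rejects the
// implicit cartesian product below with an AnalysisException. Opting in:
spark.conf.set("spark.sql.crossJoin.enabled", "true")
small.join(big).count() // now allowed: 10 * 1000 = 10000 rows
```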
---