Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10073#discussion_r46374041
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
@@ -712,6 +711,51 @@ object PushPredicateThroughAggregate extends Rule[LogicalPlan] with PredicateHel
}
/**
+ * Reorder the joins so that the bottom ones have at least one condition.
+ */
+object ReorderJoin extends Rule[LogicalPlan] with PredicateHelper {
+
+ /**
+ * Join a list of plans together and push down the conditions into them.
+ *
+ * The joined plans are picked from left to right, preferring those that have at least one join condition.
+ *
+ * @param input a list of LogicalPlans to join.
+ * @param conditions a list of conditions for the join.
+ */
+ def createOrderedJoin(input: Seq[LogicalPlan], conditions: Seq[Expression]): LogicalPlan = {
+ assert(input.size >= 2)
+ if (input.size == 2) {
+ Join(input(0), input(1), Inner, conditions.reduceLeftOption(And))
+ } else {
+ val left :: rest = input.toList
+ // find the first plan that has at least one join condition
+ val conditionalJoin = rest.find { plan =>
+ val refs = left.outputSet ++ plan.outputSet
+ conditions.filterNot(canEvaluate(_, left)).filterNot(canEvaluate(_, plan))
+ .exists(_.references.subsetOf(refs))
--- End diff ---
the implementation of `canEvaluate`:
```
protected def canEvaluate(expr: Expression, plan: LogicalPlan): Boolean =
expr.references.subsetOf(plan.outputSet)
```
So I think `conditions.exists(_.references.subsetOf(refs))` is enough here.
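To make the reference-set semantics of `canEvaluate` concrete, here is a minimal, self-contained sketch. It uses plain `Set[String]` as a hypothetical stand-in for Catalyst's `AttributeSet`; only the name `canEvaluate` and the subset check mirror the diff, everything else is illustrative:
```
object RefsSketch {
  // Stand-in for `canEvaluate`: an expression can be evaluated against a
  // plan iff every attribute it references is in that plan's output.
  def canEvaluate(exprRefs: Set[String], planOutput: Set[String]): Boolean =
    exprRefs.subsetOf(planOutput)

  def main(args: Array[String]): Unit = {
    val left = Set("a", "b")      // output of the left plan
    val plan = Set("c")           // output of a candidate right plan
    val refs = left ++ plan       // combined output, as in the diff

    val joinCond = Set("a", "c")  // a condition referencing both sides
    println(canEvaluate(joinCond, left))  // false: "c" is not in `left`
    println(canEvaluate(joinCond, plan))  // false: "a" is not in `plan`
    println(canEvaluate(joinCond, refs))  // true: both sides together cover it
  }
}
```
A condition passes the `refs` check exactly when all of its referenced attributes come from the two plans being considered, which is the containment property the `exists` clause tests.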
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]