Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/10073#discussion_r46497217
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala ---
@@ -133,6 +132,44 @@ object ExtractEquiJoinKeys extends Logging with PredicateHelper {
}
/**
+ * A pattern that collects the filter and inner joins.
+ *
+ *          Filter
+ *            |
+ *        inner Join
+ *          /    \            ---->      (Seq(plan0, plan1, plan2), conditions)
+ *      Filter   plan2
+ *        |
+ *  inner join
+ *      /    \
+ *   plan0    plan1
+ */
+object ExtractFiltersAndInnerJoins extends PredicateHelper {
+
+  // flatten all inner joins, which are next to each other
+  def flattenJoin(plan: LogicalPlan): (Seq[LogicalPlan], Seq[Expression]) = plan match {
+    case Join(left, right, Inner, cond) =>
+      // only find the nested join on left, because we can only generate the plan like that
+      val (plans, conditions) = flattenJoin(left)
+      (plans ++ Seq(right), conditions ++ cond.toSeq)
--- End diff --
conditions is already a list of Expression
---
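For context while reading the diff above, here is a minimal, self-contained sketch of the flattening idea. `Plan`, `Leaf`, `InnerJoin` and `Filter` are hypothetical stand-ins for catalyst's `LogicalPlan`, `Join` and `Filter`, and conditions are plain strings rather than `Expression`s, so this only illustrates the technique, not the actual patterns.scala code:

```scala
// Toy sketch of flattening a left-deep tree of inner joins and the filters
// sitting on top of them into (base plans, all join/filter conditions).
object FlattenSketch {
  sealed trait Plan
  case class Leaf(name: String) extends Plan
  case class InnerJoin(left: Plan, right: Plan, cond: Option[String]) extends Plan
  case class Filter(cond: String, child: Plan) extends Plan

  def flatten(plan: Plan): (Seq[Plan], Seq[String]) = plan match {
    case InnerJoin(left, right, cond) =>
      // only recurse into the left child, mirroring the left-deep shape
      // the comment in the diff refers to
      val (plans, conditions) = flatten(left)
      (plans :+ right, conditions ++ cond.toSeq)
    case Filter(cond, child: InnerJoin) =>
      // a filter directly on top of an inner join contributes its condition
      val (plans, conditions) = flatten(child)
      (plans, conditions :+ cond)
    case other =>
      (Seq(other), Seq.empty[String])
  }

  def main(args: Array[String]): Unit = {
    val tree = Filter("f1",
      InnerJoin(
        Filter("f0", InnerJoin(Leaf("plan0"), Leaf("plan1"), Some("c0"))),
        Leaf("plan2"),
        Some("c1")))
    // prints (List(Leaf(plan0), Leaf(plan1), Leaf(plan2)),List(c0, f0, c1, f1))
    println(flatten(tree))
  }
}
```

The `conditions ++ cond.toSeq` step mirrors the line commented on above: the join condition is an `Option`, so `toSeq` turns it into zero or one elements before it is appended to the `Seq` of conditions collected so far.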