Github user xuanyuanking commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220128279
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
@@ -1234,6 +1237,59 @@ object PushPredicateThroughJoin extends Rule[LogicalPlan] with PredicateHelper {
}
}
+/**
+ * Correctly handle PythonUDFs that need to access both sides of the join by changing the
+ * join type to Cross.
+ */
+object HandlePythonUDFInJoinCondition extends Rule[LogicalPlan] with PredicateHelper {
+  override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsUp {
+    case j @ Join(_, _, joinType, condition)
+        if condition.map(splitConjunctivePredicates).getOrElse(Nil).exists(
+          _.collectFirst { case udf: PythonUDF => udf }.isDefined) =>
+      if (!joinType.isInstanceOf[InnerLike] && joinType != LeftSemi) {
+        // The current strategy only supports InnerLike and LeftSemi joins, because other join
+        // types cannot simply be resolved by adding a Cross join. If we let the plan pass
+        // here, it would still fail later with a PythonUDF RuntimeException saying `requires
+        // attributes from more than one child`; we throw first here to give a more readable
+        // error message.
+        throw new AnalysisException("Using PythonUDF in join condition of join type" +
+          s" $joinType is not supported.")
+      }
+      if (SQLConf.get.crossJoinEnabled) {
--- End diff --
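For reference, the detection in the new rule boils down to: split the join
condition into its conjuncts and look for a `PythonUDF` anywhere inside any
conjunct. Below is a minimal, self-contained sketch of that check. It uses a
toy expression model (the `Expression`, `And`, `EqualTo`, `AttributeRef`, and
`PythonUDF` classes here are stand-ins, not the real catalyst ones), and the
helpers only approximate `PredicateHelper.splitConjunctivePredicates` and
`TreeNode.collectFirst`:

```scala
// Toy expression tree standing in for catalyst's Expression hierarchy.
sealed trait Expression { def children: Seq[Expression] = Nil }
case class And(left: Expression, right: Expression) extends Expression {
  override def children: Seq[Expression] = Seq(left, right)
}
case class EqualTo(left: Expression, right: Expression) extends Expression {
  override def children: Seq[Expression] = Seq(left, right)
}
case class AttributeRef(name: String) extends Expression
case class PythonUDF(name: String, args: Seq[Expression]) extends Expression {
  override def children: Seq[Expression] = args
}

// Approximates PredicateHelper.splitConjunctivePredicates: recursively split
// a condition on And into its conjuncts.
def splitConjunctivePredicates(condition: Expression): Seq[Expression] =
  condition match {
    case And(left, right) =>
      splitConjunctivePredicates(left) ++ splitConjunctivePredicates(right)
    case other => Seq(other)
  }

// Approximates TreeNode.collectFirst: pre-order search for the first node
// the partial function is defined at.
def collectFirst[T](e: Expression)(pf: PartialFunction[Expression, T]): Option[T] =
  pf.lift(e).orElse(e.children.view.flatMap(c => collectFirst(c)(pf)).headOption)

// The guard from the rule above: does any conjunct contain a PythonUDF?
def hasPythonUdfInCondition(condition: Option[Expression]): Boolean =
  condition.map(splitConjunctivePredicates).getOrElse(Nil).exists(p =>
    collectFirst(p) { case udf: PythonUDF => udf }.isDefined)

// Example: `a.id = b.id AND pyUdf(a.x, b.y)` is detected because the second
// conjunct contains a PythonUDF referencing both sides of the join.
val cond = And(
  EqualTo(AttributeRef("a.id"), AttributeRef("b.id")),
  PythonUDF("pyUdf", Seq(AttributeRef("a.x"), AttributeRef("b.y"))))
assert(hasPythonUdfInCondition(Some(cond)))
```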
Maybe the current `CheckCartesianProducts` cannot be reused, because it only
matches the case `Join(left, right, Inner | LeftOuter | RightOuter |
FullOuter, _)`, while after moving the new batch before it, the join here
will already be a Cross join. If it is acceptable to add the PythonUDF check
message to `CheckCartesianProducts`, I think your proposal can be achieved,
but maybe the current logic is better than that approach, because here we can
log the detail of why we need a Cross join. WDYT?
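To make the mismatch concrete, here is a minimal sketch (again with toy
`JoinType`/`Join` stand-ins rather than the real catalyst classes, and with
the join condition dropped) of why the pattern quoted above never fires on
the Cross join produced by `HandlePythonUDFInJoinCondition`:

```scala
// Toy join model; only the joinType field matters for the pattern match.
sealed trait JoinType
case object Inner extends JoinType
case object Cross extends JoinType
case object LeftOuter extends JoinType
case object RightOuter extends JoinType
case object FullOuter extends JoinType
case class Join(left: String, right: String, joinType: JoinType)

// Same shape as the CheckCartesianProducts pattern quoted above: Cross is
// deliberately absent from the alternatives.
def matchedByCheckCartesianProducts(j: Join): Boolean = j match {
  case Join(_, _, Inner | LeftOuter | RightOuter | FullOuter) => true
  case _ => false
}

// A join already rewritten to Cross by the new rule slips past the check,
// so any PythonUDF-specific detail added to CheckCartesianProducts would
// never be reported for it.
assert(!matchedByCheckCartesianProducts(Join("a", "b", Cross)))
```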
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]