GitHub user xuanyuanking commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22326#discussion_r220576188
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala ---
    @@ -152,3 +153,53 @@ object EliminateOuterJoin extends Rule[LogicalPlan] with PredicateHelper {
          if (j.joinType == newJoinType) f else Filter(condition, j.copy(joinType = newJoinType))
      }
    }
    +
    +/**
    + * PythonUDF in a join condition cannot be evaluated during the join, so this rule detects
    + * such PythonUDFs and pulls them out of the join condition. Python UDFs that access
    + * attributes from only one side are pushed down by the predicate push-down rules; if they
    + * are not (e.g. the user disables the filter push-down rule), this rule pulls them out as well.
    + */
    +object PullOutPythonUDFInJoinCondition extends Rule[LogicalPlan] with PredicateHelper {
    +  def hasPythonUDF(expression: Expression): Boolean = {
    +    expression.collectFirst { case udf: PythonUDF => udf }.isDefined
    +  }
    +
    +  override def apply(plan: LogicalPlan): LogicalPlan = plan transformUp {
    +    case j @ Join(_, _, joinType, condition)
    +        if condition.isDefined && hasPythonUDF(condition.get) =>
    +      if (!joinType.isInstanceOf[InnerLike] && joinType != LeftSemi) {
    +        // The current strategy only supports InnerLike and LeftSemi joins, because for other
    +        // join types it would break SQL semantics to run the join condition as a filter after
    +        // the join. If we let such a plan pass here, it would still fail later with an invalid
    +        // PythonUDF RuntimeException whose message is `requires attributes from more than one
    +        // child`; we throw here first to give a more readable error.
    +        throw new AnalysisException("Using PythonUDF in join condition of join type" +
    +          s" $joinType is not supported.")
    +      }
    +      // If the condition expression contains Python UDFs, they are moved out of the new join
    +      // condition. If the join condition contains Python UDFs only, the join is turned into a
    +      // cross join, and the cross-join-enabled flag is checked later in CheckCartesianProducts.
    +      val (udf, rest) =
    +        condition.map(splitConjunctivePredicates).get.partition(hasPythonUDF)
    +      val newCondition = if (rest.isEmpty) {
    +        logWarning(s"The join condition $condition of the join plan contains PythonUDF only, " +
    +          "it will be pulled out and the join plan will be turned into a cross join. " +
    +          s"The plan is shown below:\n $j")
    --- End diff --
    
    Got it, done in d2739af.
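
For context, here is a minimal, self-contained sketch of the split-and-partition step the rule performs on a conjunctive join condition. It uses toy types rather than Spark's catalyst classes; `Pred`, `PythonUdfCall`, `splitConjuncts`, and `hasUdf` are hypothetical stand-ins for the real `Expression`, `PythonUDF`, `splitConjunctivePredicates`, and `hasPythonUDF`.

    // Toy model, not Spark's catalyst API: illustrates splitting a conjunctive join
    // condition and partitioning the conjuncts by whether they reference a Python UDF.
    sealed trait Expr
    case class And(left: Expr, right: Expr) extends Expr
    case class PythonUdfCall(description: String) extends Expr
    case class Pred(sql: String) extends Expr

    object PullOutSketch {
      // Analogous to PredicateHelper.splitConjunctivePredicates: flatten nested ANDs.
      def splitConjuncts(e: Expr): Seq[Expr] = e match {
        case And(l, r) => splitConjuncts(l) ++ splitConjuncts(r)
        case other     => Seq(other)
      }

      // Analogous to hasPythonUDF: true if the expression tree contains a UDF call.
      def hasUdf(e: Expr): Boolean = e match {
        case _: PythonUdfCall => true
        case And(l, r)        => hasUdf(l) || hasUdf(r)
        case _                => false
      }

      def main(args: Array[String]): Unit = {
        val condition = And(Pred("a.id = b.id"), PythonUdfCall("pyUdf(a.x, b.x)"))
        val (udfConjuncts, rest) = splitConjuncts(condition).partition(hasUdf)
        // udfConjuncts are pulled out into a Filter above the join; rest (if non-empty)
        // stays as the join condition, otherwise the join degenerates to a cross join,
        // which CheckCartesianProducts validates later.
        println(s"pulled out of the join condition: $udfConjuncts")
        println(s"kept as the join condition:       $rest")
      }
    }

In the actual rule shown in the diff above, the pulled-out conjuncts become a Filter on top of the (possibly cross) join rather than being printed.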


---
