zhenlineo commented on code in PR #40581:
URL: https://github.com/apache/spark/pull/40581#discussion_r1156521094
##########
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##########
@@ -846,7 +885,28 @@ class SparkConnectPlanner(val session: SparkSession) {
private def transformFilter(rel: proto.Filter): LogicalPlan = {
assert(rel.hasInput)
val baseRel = transformRelation(rel.getInput)
- logical.Filter(condition = transformExpression(rel.getCondition), child = baseRel)
+ val cond = rel.getCondition
+ cond.getExprTypeCase match {
+ case proto.Expression.ExprTypeCase.COMMON_INLINE_USER_DEFINED_FUNCTION
Review Comment:
Looking at the proto, the minimal change needed is:
1. Always pass the Scala UDF input and return types, and
2. Leave the CommonInlineUDF arguments as provided: set when available, unset
otherwise (e.g. for the typed filter function and the map-partitions function).
I feel this approach: a. does not make any proto change; b. does not add any
extra data; c. can clearly distinguish the typed filter UDF from other UDFs.
`col("*")` would achieve a similar result, but I would prefer to avoid a magical
marker if possible.
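To illustrate the dispatch idea (not the actual Spark Connect implementation): a minimal sketch using stand-in case classes in place of the generated `proto.Expression` / `CommonInlineUserDefinedFunction` classes, showing how an empty argument list can distinguish a typed filter UDF without any proto change or magic marker.

```scala
// Stand-in types for illustration only; the real ones are the protobuf-generated
// classes in the spark-connect module.
sealed trait Expr
case class CommonInlineUdf(arguments: Seq[Expr]) extends Expr
case class Literal(value: Int) extends Expr

object FilterDispatch {
  // Per the review comment's approach: a typed filter function arrives with its
  // CommonInlineUDF arguments left unset, so an empty argument list is enough
  // to tell it apart from an ordinary inline UDF call.
  def isTypedFilterUdf(cond: Expr): Boolean = cond match {
    case CommonInlineUdf(args) if args.isEmpty => true
    case _                                     => false
  }
}
```

With this shape, `transformFilter` can branch on the condition's expression type case and, for the empty-arguments case, build the typed-filter plan instead of a plain `logical.Filter`.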
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]