HyukjinKwon commented on a change in pull request #28089:
[SPARK-30921][PySpark] Predicates on python udf should not be pushdown through Aggregate
URL: https://github.com/apache/spark/pull/28089#discussion_r402858079
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
##########
@@ -1188,9 +1188,10 @@ object PushPredicateThroughNonJoin extends Rule[LogicalPlan] with PredicateHelpe
   def getAliasMap(plan: Aggregate): AttributeMap[Expression] = {
     // Find all the aliased expressions in the aggregate list that don't include any actual
-    // AggregateExpression, and create a map from the alias to the expression
+    // AggregateExpression or PythonUDF, and create a map from the alias to the expression
     val aliasMap = plan.aggregateExpressions.collect {
-      case a: Alias if a.child.find(_.isInstanceOf[AggregateExpression]).isEmpty =>
+      case a: Alias if a.child.find(e => e.isInstanceOf[AggregateExpression] ||
+        e.isInstanceOf[PythonUDF]).isEmpty =>
Review comment:
How about using `PythonUDF.isGroupedAggPandasUDF(e)` instead? This is guaranteed by the check at
https://github.com/apache/spark/blob/5866bc77d7703939e93c00f22ea32981d4ebdc6c/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala#L213-L215
If we ever need another kind of Python UDF that supports aggregation, we would have to touch that check anyway.
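
For illustration, a minimal sketch of how that suggestion could look inside `getAliasMap`, written against the context of `PushPredicateThroughNonJoin` where `Alias`, `AggregateExpression`, `PythonUDF`, and `AttributeMap` are already in scope. The case body `(a.toAttribute, a.child)` is spelled out only because the hunk above is truncated after the `=>`, so treat this as a sketch of the reviewer's idea rather than the final patch:

    val aliasMap = plan.aggregateExpressions.collect {
      // Collect only aliases whose child contains neither a real aggregate
      // function nor a grouped aggregate pandas UDF; only such aliases are
      // safe to inline when pushing predicates below the Aggregate.
      case a: Alias if a.child.find(e => e.isInstanceOf[AggregateExpression] ||
          PythonUDF.isGroupedAggPandasUDF(e)).isEmpty =>
        (a.toAttribute, a.child)
    }
    AttributeMap(aliasMap)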