BryanCutler commented on a change in pull request #28089: [SPARK-30921][PySpark] Predicates on python udf should not be pushdown through Aggregate
URL: https://github.com/apache/spark/pull/28089#discussion_r402640937
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
 ##########
 @@ -1190,7 +1190,8 @@ object PushPredicateThroughNonJoin extends Rule[LogicalPlan] with PredicateHelpe
     // Find all the aliased expressions in the aggregate list that don't include any actual
     // AggregateExpression, and create a map from the alias to the expression
     val aliasMap = plan.aggregateExpressions.collect {
-      case a: Alias if a.child.find(_.isInstanceOf[AggregateExpression]).isEmpty =>
+      case a: Alias if a.child.find(_.isInstanceOf[AggregateExpression]).isEmpty &&
+          a.child.find(_.isInstanceOf[PythonUDF]).isEmpty =>
 
 Review comment:
   Could you check for both in a single call to `find`? For example: `a.child.find(e => e.isInstanceOf[AggregateExpression] || e.isInstanceOf[PythonUDF]).isEmpty`
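
   A minimal, self-contained sketch of the idea behind the suggestion: a single pre-order traversal with a disjunctive predicate gives the same answer as two separate traversals. The `Expr` hierarchy and `find` helper below are a toy stand-in for Catalyst's `Expression`/`TreeNode.find`, not the real classes, so this only illustrates the shape of the change:

   ```scala
   // Toy model of a Catalyst-style expression tree with a pre-order find.
   sealed trait Expr { def children: Seq[Expr] }
   case class AggregateExpression(children: Seq[Expr] = Nil) extends Expr
   case class PythonUDF(children: Seq[Expr] = Nil) extends Expr
   case class Alias(child: Expr) extends Expr { def children: Seq[Expr] = Seq(child) }
   case class Literal(value: Int) extends Expr { def children: Seq[Expr] = Nil }

   object Expr {
     // Pre-order search, analogous in spirit to TreeNode.find in Catalyst.
     def find(e: Expr)(p: Expr => Boolean): Option[Expr] =
       if (p(e)) Some(e) else e.children.view.flatMap(c => find(c)(p)).headOption
   }

   object FindExample extends App {
     val a = Alias(PythonUDF())

     // Two traversals, as in the patch under review.
     val twoCalls =
       Expr.find(a.child)(_.isInstanceOf[AggregateExpression]).isEmpty &&
         Expr.find(a.child)(_.isInstanceOf[PythonUDF]).isEmpty

     // One traversal with a combined predicate, as suggested in the comment.
     val oneCall =
       Expr.find(a.child)(e =>
         e.isInstanceOf[AggregateExpression] || e.isInstanceOf[PythonUDF]).isEmpty

     assert(twoCalls == oneCall)
     println(s"both forms agree: $oneCall") // false here, since the child is a PythonUDF
   }
   ```

   The combined predicate avoids walking the child expression tree twice while keeping the guard's meaning unchanged.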
