maropu commented on a change in pull request #26420: [SPARK-27986][SQL] Support ANSI SQL filter predicate for aggregate expression.
URL: https://github.com/apache/spark/pull/26420#discussion_r348479199
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
 ##########
 @@ -117,6 +117,21 @@ case class HashAggregateExec(
         // so return an empty iterator.
         Iterator.empty
       } else {
+        val filterPredicates = new mutable.HashMap[Int, GenPredicate]
+        aggregateExpressions.zipWithIndex.foreach{
+          case (ae: AggregateExpression, i) =>
+            ae.mode match {
+              case Partial | Complete =>
+                ae.filter.foreach { filterExpr =>
+                  val filterAttrs = filterExpr.references.toSeq
+                  val predicate = newPredicate(filterExpr, child.output ++ filterAttrs)
+                  predicate.initialize(partIndex)
+                  filterPredicates(i) = predicate
+                }
 
 Review comment:
   Fixed in https://github.com/apache/spark/pull/26604. Could you brush up this PR based on that commit?
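
   For context, the diff above builds one filter predicate per aggregate expression so that each aggregate only consumes rows passing its optional ANSI `FILTER (WHERE ...)` clause. Below is a minimal, self-contained Scala sketch of that semantics; the names (`Row`, `AggWithFilter`, `FilterDemo`) are illustrative and are not Spark's actual classes or APIs:

```scala
// Hedged sketch of ANSI SQL FILTER semantics for aggregates (SPARK-27986).
// Each aggregate keeps its own optional row predicate; a row updates the
// aggregation buffer only when the predicate is absent or evaluates to true.
case class Row(dept: String, salary: Int)

// Illustrative stand-in for an aggregate expression with an optional filter.
class AggWithFilter(filter: Option[Row => Boolean]) {
  private var sum = 0
  def update(row: Row): Unit = {
    // filter.forall(...) is true when no filter is attached, mirroring the
    // Partial/Complete-mode branch in the diff above.
    if (filter.forall(p => p(row))) sum += row.salary
  }
  def result: Int = sum
}

object FilterDemo {
  // SELECT SUM(salary), SUM(salary) FILTER (WHERE dept = 'eng') FROM rows
  def run(rows: Seq[Row]): (Int, Int) = {
    val total   = new AggWithFilter(None)
    val engOnly = new AggWithFilter(Some(_.dept == "eng"))
    rows.foreach { r =>
      total.update(r)
      engOnly.update(r)
    }
    (total.result, engOnly.result)
  }
}
```

   Both aggregates scan the same input once; only the filtered one skips non-matching rows, which is why the diff keys predicates by the aggregate's index rather than attaching a single filter to the whole operator.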

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
