Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19193#discussion_r156060138
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---
    @@ -1920,7 +1927,34 @@ class Analyzer(
     
           case p: LogicalPlan if !p.childrenResolved => p
     
    -      // Aggregate without Having clause.
    +      // Extract window expressions from aggregate functions. There might be an aggregate whose
    +      // aggregate function contains a window expression as a child, which we need to extract.
    +      // e.g., df.groupBy().agg(max(rank().over(window)))
    +      case a @ Aggregate(groupingExprs, aggregateExprs, child)
    +        if containsAggregateFunctionWithWindowExpression(aggregateExprs) &&
    +           a.expressions.forall(_.resolved) =>
    +
    +        val windowExprAliases = new ArrayBuffer[NamedExpression]()
    +        val newAggregateExprs = aggregateExprs.map { expr =>
    +          expr.transform {
    --- End diff --
    
    The code below assumes that there are no window aggregates on top of a regular aggregate; when there are, it will push the regular aggregate into the underlying window. An example of this:
    `df.groupBy(a).agg(max(rank().over(window1)), sum(sum(c)).over(window2))`
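    For concreteness, a minimal sketch of that query shape in the DataFrame API. The sample data, the column names (`a`, `b`, `c`), the window specs `window1`/`window2`, and the object name are all assumptions made for illustration; the point is only to show the mixed shape the rewrite has to handle, not to assert that this exact query analyzes cleanly today.

    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._

    object WindowOnAggregateSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[*]").appName("sketch").getOrCreate()
        import spark.implicits._

        // Assumed toy input with columns a, b, c.
        val df = Seq((1, 10, 100), (1, 20, 200), (2, 30, 300)).toDF("a", "b", "c")

        // Assumed window specs: rank() needs an ordering, so window1 orders by an
        // aggregate-level expression; window2 partitions over the grouping column.
        val window1 = Window.orderBy(sum($"b"))
        val window2 = Window.partitionBy($"a")

        // The shape in question: an aggregate on top of a window function
        // (max(rank().over(window1))) next to a window aggregate on top of a
        // regular aggregate (sum(sum(c)).over(window2)).
        val query = df.groupBy($"a").agg(
          max(rank().over(window1)),
          sum(sum($"c")).over(window2)
        )

        query.explain(true)
        spark.stop()
      }
    }
    ```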

