Github user juliuszsompolski commented on a diff in the pull request:
https://github.com/apache/spark/pull/19324#discussion_r140528662
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala ---
@@ -599,10 +621,14 @@ case class HashAggregateExec(
         }
       } else ""
     }
+    ctx.addExtraCode(generateGenerateCode())
+    val doAgg = ctx.freshName("doAggregateWithKeys")
+    val peakMemory = metricTerm(ctx, "peakMemory")
+    val spillSize = metricTerm(ctx, "spillSize")
+    val avgHashProbe = metricTerm(ctx, "avgHashProbe")
     val doAggFuncName = ctx.addNewFunction(doAgg,
       s"""
-        ${generateGenerateCode}
--- End diff ---
this is a tangential fix: the generated code for the hash map was piggy-backed here into the same string as the `doAggregateWithKeys` function, so it could become inaccessible from the top-level function if that function gets generated into a nested class (after https://github.com/apache/spark/pull/18075)
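To illustrate the point, here is a minimal, self-contained Scala sketch. It is not Spark's real `CodegenContext`: `FakeCodegenContext`, its `emit` method, and the nested-class split flag are hypothetical stand-ins, used only to show where generated text ends up when functions are moved into a nested class versus registered at the outer class level.

```scala
// Minimal sketch, NOT Spark's real CodegenContext: FakeCodegenContext and
// emit() are hypothetical stand-ins that only model where generated text
// is placed when functions are split into a nested class.
import scala.collection.mutable.ArrayBuffer

object CodegenPlacementSketch {

  class FakeCodegenContext {
    private val functions = ArrayBuffer.empty[String]
    private val extraCode = ArrayBuffer.empty[String]

    // Registers a generated function and returns its name.
    def addNewFunction(name: String, code: String): String = {
      functions += code
      name
    }

    // Analogue of the ctx.addExtraCode(...) call in the diff: code registered
    // here is always emitted as a member of the outer generated class.
    def addExtraCode(code: String): Unit = extraCode += code

    // Assembles the generated source. When splitIntoNestedClass is true, the
    // functions are wrapped in a nested class (roughly what the nested-class
    // split in PR #18075 can do), while extra code stays in the outer class.
    def emit(splitIntoNestedClass: Boolean): String = {
      val fns = functions.mkString("\n")
      val members =
        if (splitIntoNestedClass) s"class NestedCode {\n$fns\n}" else fns
      s"class GeneratedIterator {\n${extraCode.mkString("\n")}\n$members\n}"
    }
  }

  def main(args: Array[String]): Unit = {
    val fastHashMapClass =
      "class agg_FastHashMap { /* generated fast hash map */ }"
    val doAggFunction =
      "private void doAggregateWithKeys() { /* uses agg_FastHashMap */ }"

    // Old layout: the hash map class text is piggy-backed into the same
    // string as doAggregateWithKeys, so it moves into the nested class with
    // it and code left at the outer class level can no longer see it.
    val oldCtx = new FakeCodegenContext
    oldCtx.addNewFunction("doAggregateWithKeys",
      s"$fastHashMapClass\n$doAggFunction")
    println("--- old layout ---")
    println(oldCtx.emit(splitIntoNestedClass = true))

    // New layout (the tangential fix): the hash map class is registered as
    // extra code, so it stays in the outer class even when
    // doAggregateWithKeys is generated inside a nested class.
    val newCtx = new FakeCodegenContext
    newCtx.addExtraCode(fastHashMapClass)
    newCtx.addNewFunction("doAggregateWithKeys", doAggFunction)
    println("--- new layout ---")
    println(newCtx.emit(splitIntoNestedClass = true))
  }
}
```

Running the sketch prints the two layouts side by side: in the old one the hash map class ends up inside `NestedCode`, in the new one it stays a member of the outer generated class.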