Github user liufengdb commented on a diff in the pull request:
https://github.com/apache/spark/pull/20174#discussion_r160042482
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala ---
@@ -245,11 +252,15 @@ case class HashAggregateExec(
| $doAggFuncName();
| $aggTime.add((System.nanoTime() - $beforeAgg) / 1000000);
|
- | // output the result
- | ${genResult.trim}
+ | if (!$hasInput && ${resultVars.isEmpty}) {
--- End diff ---
I think it hurts code readability if the code for the two cases is
defined separately. For the regular case, the generated code will look like
`if (false && !hasInput) ... else ...`. This pattern should be optimized away
easily by the JIT, so we don't need to worry too much about performance.
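
To illustrate the shape I mean, here is a minimal, self-contained sketch in plain
Scala (made-up names such as `BranchSketch`, `hasInput`, and `resultVarsEmpty`,
not the actual generated Java): the `resultVars.isEmpty` side becomes a
codegen-time constant, so the first branch is trivially dead in the regular case.

```scala
// Not the real codegen output -- just the control-flow pattern it would emit.
object BranchSketch {
  def main(args: Array[String]): Unit = {
    val hasInput = args.nonEmpty   // stand-in for the $hasInput flag
    val resultVarsEmpty = false    // constant-folded to false in the regular case

    if (resultVarsEmpty && !hasInput) {
      // "no input and no result vars" path -- unreachable for the regular case,
      // so the JIT can drop it entirely
      println("empty-result path")
    } else {
      // regular path: output the result
      println("output result")
    }
  }
}
```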
---