rednaxelafx commented on a change in pull request #20965: [SPARK-21870][SQL] Split aggregation code into small functions
URL: https://github.com/apache/spark/pull/20965#discussion_r316517404
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
 ##########
 @@ -267,29 +302,81 @@ case class HashAggregateExec(
          e.aggregateFunction.asInstanceOf[DeclarativeAggregate].mergeExpressions
       }
     }
-    ctx.currentVars = bufVars ++ input
-    val boundUpdateExpr = bindReferences(updateExpr, inputAttrs)
-    val subExprs = ctx.subexpressionEliminationForWholeStageCodegen(boundUpdateExpr)
-    val effectiveCodes = subExprs.codes.mkString("\n")
-    val aggVals = ctx.withSubExprEliminationExprs(subExprs.states) {
-      boundUpdateExpr.map(_.genCode(ctx))
-    }
-    // aggregate buffer should be updated atomic
-    val updates = aggVals.zipWithIndex.map { case (ev, i) =>
+
+    if (!conf.codegenSplitAggregateFunc) {
+      ctx.currentVars = bufVars ++ input
+      val boundUpdateExpr = updateExpr.map(BindReferences.bindReference(_, inputAttrs))
+      val subExprs = ctx.subexpressionEliminationForWholeStageCodegen(boundUpdateExpr)
+      val effectiveCodes = subExprs.codes.mkString("\n")
+      val aggVals = ctx.withSubExprEliminationExprs(subExprs.states) {
+        boundUpdateExpr.map(_.genCode(ctx))
+      }
+      // aggregate buffer should be updated atomic
+      val updates = aggVals.zipWithIndex.map { case (ev, i) =>
+        s"""
+           | ${bufVars(i).isNull} = ${ev.isNull};
+           | ${bufVars(i).value} = ${ev.value};
+       """.stripMargin
+      }
+      s"""
+         | // do aggregate
+         | // common sub-expressions
+         | $effectiveCodes
+         | // evaluate aggregate function
+         | ${evaluateVariables(aggVals)}
+         | // update aggregation buffer
+         | ${updates.mkString("\n").trim}
+     """.stripMargin
+    } else {
+      // We need to copy the aggregation buffer to local variables first because each aggregate
 
 Review comment:
   I think I'm starting to understand: I had made a different basic assumption about your splitting logic.
   
   I started realizing this when reading the sample generated code in the PR description. (BTW, that sample code should be updated to match what this PR currently generates once you rebase onto the latest master.)
   ```java
   /* 108 */       // copy aggregation buffer to the local
   /* 109 */       boolean agg_localBufIsNull = agg_bufIsNull;
   /* 110 */       long agg_localBufValue = agg_bufValue;
   /* 111 */       boolean agg_localBufIsNull1 = agg_bufIsNull1;
   /* 112 */       double agg_localBufValue1 = agg_bufValue1;
   /* 113 */       boolean agg_localBufIsNull2 = agg_bufIsNull2;
   /* 114 */       long agg_localBufValue2 = agg_bufValue2;
   /* 115 */       // common sub-expressions
   /* 116 */
   /* 117 */       // process aggregate functions to update aggregation buffer
   /* 118 */       agg_doAggregateVal_coalesce(agg_localBufIsNull, agg_localBufValue, inputadapter_value, inputadapter_isNull);
   /* 119 */       agg_doAggregateVal_add(agg_localBufValue1, inputadapter_isNull1, inputadapter_value1, agg_localBufIsNull1);
   /* 120 */       agg_doAggregateVal_add1(inputadapter_isNull2, inputadapter_value2, agg_localBufIsNull2, agg_localBufValue2);
   ```
   I thought that for `SUM(a), AVG(a)` I was going to see `agg_doAggregateVal_sum` and `agg_doAggregateVal_avg`, but what I'm seeing here is a more fine-grained split, with one generated method per update expression:
   - `agg_doAggregateVal_coalesce` for `sum(a)`
   - `agg_doAggregateVal_add` and `agg_doAggregateVal_add1` for `avg(a)`
   
   My previous comment in this thread only applies when the splitting boundary is at per-aggregate-expression granularity, rather than at per-update-expression granularity within each aggregate function.
   
   `kurtosis()` is pretty much the largest declarative aggregate function in Spark SQL right now. I don't think a single `kurtosis()` would go over 8000 bytes worth of bytecode (HotSpot's `HugeMethodLimit`, beyond which the JIT refuses to compile a method), so maybe per-aggregate-expression granularity would make more sense?
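   
   To make the two granularities concrete, here is a minimal self-contained Scala sketch. The types and helper names (`UpdateExpr`, `AggFunc`, `splitPerUpdateExpr`, `splitPerAggExpr`) are made up for illustration and do not exist in Spark; only the method-name pattern mirrors the generated code above.

```scala
// Hypothetical model of the two splitting granularities discussed above.
case class UpdateExpr(name: String)
case class AggFunc(name: String, updates: Seq[UpdateExpr])

// Per-update-expression granularity (what the PR's sample generated code
// shows): one split-out method per update expression.
def splitPerUpdateExpr(funcs: Seq[AggFunc]): Seq[String] =
  funcs.flatMap(_.updates.map(u => s"agg_doAggregateVal_${u.name}"))

// Per-aggregate-expression granularity (what I expected): one split-out
// method per aggregate function, e.g. one for sum(a) and one for avg(a).
def splitPerAggExpr(funcs: Seq[AggFunc]): Seq[String] =
  funcs.map(f => s"agg_doAggregateVal_${f.name}")

val aggs = Seq(
  AggFunc("sum", Seq(UpdateExpr("coalesce"))),
  AggFunc("avg", Seq(UpdateExpr("add"), UpdateExpr("add1"))))

println(splitPerUpdateExpr(aggs))
// List(agg_doAggregateVal_coalesce, agg_doAggregateVal_add, agg_doAggregateVal_add1)
println(splitPerAggExpr(aggs))
// List(agg_doAggregateVal_sum, agg_doAggregateVal_avg)
```

   The coarser granularity produces fewer, larger methods, which is why the 8000-byte bytecode limit is the relevant constraint when choosing between the two.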

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
