sarutak commented on a change in pull request #32786:
URL: https://github.com/apache/spark/pull/32786#discussion_r647007683



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/AggregatingAccumulator.scala
##########
@@ -148,6 +148,14 @@ class AggregatingAccumulator private(
   }
 
   override def merge(other: AccumulatorV2[InternalRow, InternalRow]): Unit = withSQLConf(()) {
+    mergeInternal(other)
+  }
+
+  /**
+   * Actual code for merge. This function is not intended to be called directly from executor side.
+   * One exception is `addState`.
+   */
+  private def mergeInternal(other: AccumulatorV2[InternalRow, InternalRow]): Unit = {

Review comment:
       According to the comment on `withSQLConf`, `merge` is normally intended to be called from the driver side, not from the executor side. In fact, wrapping `merge` with `withSQLConf` effectively separates the driver-side and executor-side logic. So `mergeInternal` exists for the exceptional cases where the executor really needs to invoke the merge logic directly.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
