cloud-fan commented on a change in pull request #32786:
URL: https://github.com/apache/spark/pull/32786#discussion_r647073606



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/AggregatingAccumulator.scala
##########
@@ -148,6 +148,14 @@ class AggregatingAccumulator private(
   }
 
   override def merge(other: AccumulatorV2[InternalRow, InternalRow]): Unit = withSQLConf(()) {
+    mergeInternal(other)
+  }
+
+  /**
+   * Actual code for merge. This function is not intended to be called directly from executor side.
+   * One exception is `addState`.
+   */
+  private def mergeInternal(other: AccumulatorV2[InternalRow, InternalRow]): Unit = {

Review comment:
       > merge with withSQLConf seems to separate logic for driver and executor.
   
    It's more of a sanity check to make sure `update`/`merge` is only called on 
the expected side (driver or executor). Since we now have a legitimate reason to 
call `merge` on the executor side, let's document that clearly and allow it. To 
keep `update` untouched, we can create an overload of `withSQLConf` just for 
`merge`.
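    To make the suggestion concrete, here is a simplified, hypothetical sketch of the overload idea. The class, method bodies, and the `atDriverSide` flag are stand-ins modeled on the discussion, not the actual `AggregatingAccumulator` implementation:

```scala
// Hypothetical sketch only: keep the side sanity check for `update`,
// but give `merge` its own guard that documents and permits the
// executor-side call. Names and bodies are illustrative.
class AccumulatorSketch(atDriverSide: Boolean) {

  // Existing-style guard used by `update`: asserts the call happens
  // on the expected (executor) side before running the body.
  private def withSQLConf[T](body: => T): T = {
    assert(!atDriverSide, "update should only run on the executor side")
    body
  }

  // Hypothetical overload for `merge`: merge normally runs on the
  // driver, but an executor-side call (e.g. from `addState`) is
  // explicitly allowed, so no side assertion is made here.
  private def withSQLConfForMerge[T](body: => T): T = body

  def update(): Unit = withSQLConf { /* update aggregation buffer */ }
  def merge(): Unit = withSQLConfForMerge { /* merge partial states */ }
}
```

This keeps `update`'s sanity check intact while the relaxed behavior is confined to, and documented at, the `merge` path.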




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
