mridulm commented on code in PR #38064:
URL: https://github.com/apache/spark/pull/38064#discussion_r1001241694


##########
core/src/main/scala/org/apache/spark/executor/Executor.scala:
##########
@@ -659,9 +659,10 @@ private[spark] class Executor(
         val accumUpdates = task.collectAccumulatorUpdates()
         val metricPeaks = metricsPoller.getTaskMetricPeaks(taskId)
         // TODO: do not serialize value twice
-        val directResult = new DirectTaskResult(valueBytes, accumUpdates, metricPeaks)
-        val serializedDirectResult = ser.serialize(directResult)
-        val resultSize = serializedDirectResult.limit()
+        val directResult = new DirectTaskResult(valueByteBuffer, accumUpdates, metricPeaks)
+        val serializedDirectResult = SerializerHelper.serializeToChunkedBuffer(ser, directResult,
+          valueByteBuffer.size)

Review Comment:
   As I mentioned above, get a reasonable upper bound - we can always improve it in the future.
   The main reason I suggested improving the estimate for accumulators in particular is that the rest of DirectTaskResult is typically much smaller than the result value - so a reasonable overestimation there might help minimize the need for two large buffers when one would do.
   But this is mostly a nit - we can add a comment to the code, if we are unsure of the heuristics, to improve it in the future.
   
   The main reason for suggesting this is to reduce wastage at the driver.
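
   To illustrate the idea, here is a minimal sketch of a size heuristic of the kind being discussed: pre-size the serialization buffer from the result value's size plus a deliberate overestimate for the smaller DirectTaskResult fields, so one buffer suffices instead of two. All names and constants here (`estimatedUpperBound`, the per-field overheads) are hypothetical, not part of the Spark codebase:

   ```scala
   // Hedged sketch, not Spark code: estimate an upper bound on the serialized
   // size of a task result so a chunked buffer can be pre-sized in one pass.
   object ResultSizeEstimate {
     // The result value dominates the serialized size; accumulator updates and
     // metric peaks are typically much smaller, so a fixed per-entry
     // overestimate is usually enough headroom.
     def estimatedUpperBound(valueSize: Long, numAccumulators: Int, numMetricPeaks: Int): Long = {
       val perAccumOverhead  = 256L // hypothetical overestimate per accumulator
       val perMetricOverhead = 8L   // roughly one Long per metric peak
       val framingOverhead   = 64L  // slack for serializer framing / metadata
       valueSize +
         numAccumulators * perAccumOverhead +
         numMetricPeaks * perMetricOverhead +
         framingOverhead
     }
   }

   // Example: a 10 MiB result value with 20 accumulators and 32 metric peaks.
   val bound = ResultSizeEstimate.estimatedUpperBound(10L << 20, 20, 32)
   assert(bound >= (10L << 20)) // the bound always covers the value itself
   ```

   Overestimating slightly wastes a few bytes per task but avoids reallocating or holding two large buffers at the driver, which is the wastage this comment is about.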



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
