mridulm commented on code in PR #38064:
URL: https://github.com/apache/spark/pull/38064#discussion_r1001237268


##########
core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala:
##########
@@ -207,6 +267,18 @@ private[spark] object ChunkedByteBuffer {
     }
     out.toChunkedByteBuffer
   }
+
+  /**
+   * Try to estimate an appropriate chunk size so that it's not too large (wastes memory) or too
+   * small (too many segments)
+   */
+  def estimateBufferChunkSize(estimatedSize: Long = -1): Int = {
+    if (estimatedSize < 0) {
+      CHUNK_BUFFER_SIZE
+    } else {
+      Math.max(Math.min(estimatedSize >> 3, CHUNK_BUFFER_SIZE).toInt, MINIMUM_CHUNK_BUFFER_SIZE)

Review Comment:
   I would focus more on reasonable estimation and on preventing orders-of-magnitude wastage.
   The cost difference between an 8k and a 10k buffer is, in this context, negligible if not zero.
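
   To illustrate the heuristic under discussion, here is a minimal standalone sketch of the proposed `estimateBufferChunkSize` logic. The constant values below (1 MiB upper bound, 1 KiB lower bound) are assumed stand-ins for illustration only, not Spark's actual `CHUNK_BUFFER_SIZE` / `MINIMUM_CHUNK_BUFFER_SIZE` values:

```scala
object ChunkSizeSketch {
  // Assumed stand-in bounds; Spark's real constants may differ.
  val CHUNK_BUFFER_SIZE: Int = 1024 * 1024    // 1 MiB upper bound
  val MINIMUM_CHUNK_BUFFER_SIZE: Int = 1024   // 1 KiB lower bound

  // Mirrors the proposed heuristic: take 1/8 of the estimated size,
  // then clamp it into [MINIMUM_CHUNK_BUFFER_SIZE, CHUNK_BUFFER_SIZE].
  // A negative estimate means "unknown", which falls back to the maximum.
  def estimateBufferChunkSize(estimatedSize: Long = -1): Int = {
    if (estimatedSize < 0) {
      CHUNK_BUFFER_SIZE
    } else {
      Math.max(Math.min(estimatedSize >> 3, CHUNK_BUFFER_SIZE).toInt,
        MINIMUM_CHUNK_BUFFER_SIZE)
    }
  }

  def main(args: Array[String]): Unit = {
    println(estimateBufferChunkSize())            // unknown size -> upper bound
    println(estimateBufferChunkSize(100L))        // tiny estimate -> clamped to lower bound
    println(estimateBufferChunkSize(80L * 1024))  // 80 KiB -> 10 KiB (size / 8)
  }
}
```

   The clamping means any estimate within roughly an order of magnitude of the true size lands on a reasonable chunk size, which is the point of the comment above: small absolute differences (8k vs 10k) don't matter, only gross misestimates do.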



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
