liuzqt commented on code in PR #38064:
URL: https://github.com/apache/spark/pull/38064#discussion_r1001086967
##########
core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala:
##########
@@ -207,6 +267,18 @@ private[spark] object ChunkedByteBuffer {
}
out.toChunkedByteBuffer
}
+
+
+  /**
+   * Try to estimate an appropriate chunk size so that it's neither too large
+   * (wasting memory) nor too small (too many segments).
+   */
+  def estimateBufferChunkSize(estimatedSize: Long = -1): Int = {
+    if (estimatedSize < 0) {
+      CHUNK_BUFFER_SIZE
+    } else {
+      Math.max(Math.min(estimatedSize >> 3, CHUNK_BUFFER_SIZE).toInt, MINIMUM_CHUNK_BUFFER_SIZE)
Review Comment:
The logic is explained in the [comments
below](https://github.com/apache/spark/pull/38064#discussion_r999794248).
The heuristic is somewhat arbitrary, but I think it makes sense. Feel
free to leave any feedback, thanks!
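For illustration, here is a sketch of the heuristic in Python: pick roughly one eighth of the estimated total size as the chunk size, then clamp it into a fixed range. The two constant values below are assumed placeholders, since the actual values of `CHUNK_BUFFER_SIZE` and `MINIMUM_CHUNK_BUFFER_SIZE` are not shown in this diff:

```python
# Assumed placeholder bounds -- the real Spark constants may differ.
CHUNK_BUFFER_SIZE = 1024 * 1024      # assumed upper bound: 1 MiB
MINIMUM_CHUNK_BUFFER_SIZE = 1024     # assumed lower bound: 1 KiB

def estimate_buffer_chunk_size(estimated_size: int = -1) -> int:
    """Pick a chunk size near estimated_size / 8, clamped to [min, max]."""
    if estimated_size < 0:
        # Size unknown: fall back to the default chunk size.
        return CHUNK_BUFFER_SIZE
    # estimated_size >> 3 targets about 8 chunks for the whole buffer,
    # then the clamp keeps the result inside the allowed range.
    return max(min(estimated_size >> 3, CHUNK_BUFFER_SIZE),
               MINIMUM_CHUNK_BUFFER_SIZE)
```

With these bounds, tiny estimates are rounded up to the minimum chunk size, and very large estimates are capped at the default, so the chunk count stays bounded in both directions.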
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]