Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22105#discussion_r210350203
--- Diff: common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java ---
@@ -140,8 +140,24 @@ private int copyByteBuf(ByteBuf buf, WritableByteChannel target) throws IOException {
    // SPARK-24578: cap the sub-region's size of returned nio buffer to improve the performance
    // for the case that the passed-in buffer has too many components.
    int length = Math.min(buf.readableBytes(), NIO_BUFFER_LIMIT);
--- End diff ---
Out of curiosity, how did we come up with the NIO_BUFFER_LIMIT value of 256KB?
In Hadoop, they use
[8KB](https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L3234).
On most OSes, the `write(ByteBuffer[])` API in `sun.nio.ch.IOUtil` goes
one buffer at a time and gets a temporary direct buffer from the
`BufferCache`, up to a limit of `IOUtil#IOV_MAX`, which is 1024.
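
To make the capping concrete, here is a minimal, self-contained sketch of the
idea (assuming Netty 4.1 on the classpath; `CappedCopyDemo` and its `main` are
made up for illustration and are not Spark's actual `MessageWithHeader` code):
at most `NIO_BUFFER_LIMIT` bytes of a composite buffer are materialized as a
single NIO buffer per write call.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class CappedCopyDemo {
  // Stand-in for MessageWithHeader.NIO_BUFFER_LIMIT (256KB in the patch under review).
  private static final int NIO_BUFFER_LIMIT = 256 * 1024;

  // Materialize at most NIO_BUFFER_LIMIT bytes of the (possibly composite) ByteBuf
  // as a single NIO buffer per write call, instead of asking nioBuffer() to
  // consolidate every component at once.
  static int copyByteBuf(ByteBuf buf, WritableByteChannel target) throws IOException {
    int length = Math.min(buf.readableBytes(), NIO_BUFFER_LIMIT);
    ByteBuffer nio = buf.nioBuffer(buf.readerIndex(), length);
    int written = target.write(nio);
    buf.skipBytes(written);
    return written;
  }

  public static void main(String[] args) throws IOException {
    // A composite buffer with many small components, the case the cap targets.
    CompositeByteBuf composite = Unpooled.compositeBuffer(Integer.MAX_VALUE);
    for (int i = 0; i < 4096; i++) {
      composite.addComponent(true, Unpooled.wrappedBuffer(new byte[512]));
    }

    WritableByteChannel channel = Channels.newChannel(new ByteArrayOutputStream());
    long total = 0;
    while (composite.isReadable()) {
      total += copyByteBuf(composite, channel);
    }
    System.out.println("bytes written: " + total);  // 4096 * 512 = 2MB, written in capped chunks
    composite.release();
  }
}
```

With the cap in place, each `nioBuffer(index, length)` call on the composite
buffer only has to consolidate up to 256KB worth of components per write,
rather than copying the entire buffer into one large NIO buffer in a single call.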
---