Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-204207886
Hi @vanzin, the location of the memory copy was pointed out by @zsxwing; the call
stack is as follows:
```
at java.nio.Bits.copyFromArray(Bits.java:754)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:371)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:342)
at sun.nio.ch.IOUtil.write(IOUtil.java:60)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:466)
- locked <0x00007f8a8a28d400> (a java.lang.Object)
at org.apache.spark.network.protocol.MessageWithHeader.copyByteBuf(MessageWithHeader.java:131)
at org.apache.spark.network.protocol.MessageWithHeader.transferTo(MessageWithHeader.java:114)
```
The full buffer copy happens at line 60 of `IOUtil.java`:
http://www.grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7u40-b43/sun/nio/ch/IOUtil.java#60
but the buffer cannot be written in full when its size exceeds the space available
in the underlying socket buffer, as line 65 shows:
http://www.grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7u40-b43/sun/nio/ch/IOUtil.java#65
So on every call we copy the entire input `ByteBuf` but, when the input is
relatively large, write only part of it. The result is multiple unnecessary
copies of the same input `ByteBuf`.
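To make the copy amplification concrete, here is a minimal, hypothetical sketch (not code from this PR or the JDK; `NaiveWriter` and `naiveWrite` are illustrative names) of a plain write loop over a heap buffer:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class NaiveWriter {
    // Drain a heap ByteBuffer into a socket with repeated write() calls.
    static void naiveWrite(SocketChannel channel, ByteBuffer heapBuffer) throws IOException {
        while (heapBuffer.hasRemaining()) {
            // For a heap buffer, each write() first copies ALL remaining
            // bytes into a temporary direct buffer (sun.nio.ch.IOUtil.write),
            // even if the socket then accepts only a small fraction of them.
            // An N-byte buffer drained in k partial writes is thus copied
            // roughly k times. (A real non-blocking caller would register
            // for OP_WRITE instead of spinning on write() returning 0.)
            channel.write(heapBuffer);
        }
    }
}
```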
This PR handles the issue the same way Hadoop does; please refer to
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2957
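For comparison, here is a minimal sketch of that chunked-write approach, in the spirit of Hadoop's `Server#channelIO`, which caps each write at a small fixed `NIO_BUFFER_LIMIT`; `ChunkedWriter`, `chunkedWrite`, and `CHUNK_SIZE` are illustrative names rather than identifiers from this PR, and the chunk size itself is a tuning choice:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

class ChunkedWriter {
    // Illustrative cap on how many bytes a single write() call may see.
    private static final int CHUNK_SIZE = 8 * 1024;

    // Write buf to the channel in chunks so that the JDK's internal
    // heap-to-direct copy is bounded by CHUNK_SIZE per call instead of
    // the whole remaining buffer.
    static int chunkedWrite(WritableByteChannel channel, ByteBuffer buf) throws IOException {
        int originalLimit = buf.limit();
        int initialRemaining = buf.remaining();
        while (buf.remaining() > 0) {
            try {
                // Temporarily lower the limit to expose at most CHUNK_SIZE bytes.
                int ioSize = Math.min(buf.remaining(), CHUNK_SIZE);
                buf.limit(buf.position() + ioSize);
                int written = channel.write(buf);
                if (written < ioSize) {
                    break; // socket buffer is full; the caller retries later
                }
            } finally {
                buf.limit(originalLimit); // restore the caller's limit each pass
            }
        }
        return initialRemaining - buf.remaining();
    }
}
```

Restoring the limit in a `finally` block keeps the buffer consistent for the caller even if `write()` throws, and breaking on a short write avoids busy-spinning on a full socket buffer.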