[
https://issues.apache.org/jira/browse/HADOOP-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591350#action_12591350
]
Doug Cutting commented on HADOOP-3164:
--------------------------------------
The point of the patch is to bypass the buffer. So making the buffer big
doesn't improve the utility of transferTo(), but rather just hides the fact that
the implementation of transferTo() on the Mac sucks. It thus makes no sense to
gate the use of transferTo() on the bufferSize.
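(Not from the patch itself, just a minimal sketch of the distinction: a
conventional copy stages every byte in a user-space ByteBuffer, so the buffer's
size matters, while transferTo() hands the whole copy to the kernel and never
touches that buffer, which is why its usefulness has nothing to do with
bufferSize.)

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class BufferVsTransferTo {

  // The path the patch avoids: every byte is staged in a user-space buffer,
  // so the buffer's size directly affects throughput and memory.
  static void bufferedCopy(FileChannel src, WritableByteChannel dst,
                           int bufferSize) throws IOException {
    ByteBuffer buf = ByteBuffer.allocate(bufferSize);
    while (src.read(buf) != -1 || buf.position() != 0) {
      buf.flip();
      dst.write(buf);
      buf.compact();
    }
  }

  // The path the patch enables: the kernel moves the bytes directly, so no
  // user-space buffer is involved. (A real caller loops on the return value,
  // since transferTo() may move fewer bytes than requested.)
  static void zeroCopy(FileChannel src, WritableByteChannel dst)
      throws IOException {
    src.transferTo(0, src.size(), dst);
  }
}
{code}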
Also, increasing the default buffer size can have a significant impact on
memory usage, since some applications open lots of streams, and each open
stream frequently has several buffers. When we find that a particular
operation benefits from a larger buffer, it is usually best to increase just
its buffer size rather than the default buffer size for all Hadoop I/O streams.
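As an illustration of that last point (not something from this issue, and the
path is made up): a caller can already ask for a bigger buffer on one
particular stream through FileSystem.open(Path, int), while every other stream
keeps the io.file.buffer.size default.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerStreamBufferSize {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    // This one stream gets a 1 MB buffer; all other streams keep the
    // io.file.buffer.size default. The path is hypothetical.
    FSDataInputStream in = fs.open(new Path("/data/large-sequential-file"), 1 << 20);
    in.close();
  }
}
{code}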
I think option (2) still looks best.
> Use FileChannel.transferTo() when data is read from DataNode.
> -------------------------------------------------------------
>
> Key: HADOOP-3164
> URL: https://issues.apache.org/jira/browse/HADOOP-3164
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.18.0
>
> Attachments: HADOOP-3164.patch, HADOOP-3164.patch, HADOOP-3164.patch,
> HADOOP-3164.patch, HADOOP-3164.patch
>
>
> HADOOP-2312 talks about using FileChannel's
> [{{transferTo()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)]
> and
> [{{transferFrom()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)]
> in DataNode.
> At the time, the DataNode neither used NIO sockets nor wrote large chunks of
> contiguous block data to the socket. Hadoop 0.17 does both when data is served
> to clients (and other datanodes). I am planning to try using transferTo() in
> trunk. This might reduce the DataNode's CPU usage by another 50% or more.
> Once HADOOP-1702 is committed, we can look into using transferFrom().
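(An assumed shape for the DataNode side, not the attached patch: serving a
block region comes down to looping transferTo() over the block file's channel
and the client's SocketChannel, since a single call may move fewer bytes than
requested.)

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class BlockSenderSketch {
  // Sends [offset, offset + length) of a block file to the client socket
  // without copying the data through a user-space buffer. Assumes a
  // blocking SocketChannel.
  static void sendBlockRegion(FileInputStream blockFile, SocketChannel client,
                              long offset, long length) throws IOException {
    FileChannel ch = blockFile.getChannel();
    long sent = 0;
    while (sent < length) {
      sent += ch.transferTo(offset + sent, length - sent, client);
    }
  }
}
{code}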