[
https://issues.apache.org/jira/browse/HADOOP-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591095#action_12591095
]
Raghu Angadi commented on HADOOP-3164:
--------------------------------------
Thanks Sam. I am assuming you ran with a large value (like 64 KB or 128 KB) for
{{io.file.buffer.size}}. Given this, the options I can think of are:
# Have an internal config variable to turn on this feature, with the default off.
# The 1st option, but with the default on for Linux (and any other OS with
positive results) and off on the rest.
# No need to have this code.
# Always on.
I hope the last two options are ruled out.
My preference is the 2nd option. Every option has (obvious) pros and cons. One
isolated and well-commented OS check is not such a terrible thing (well, maybe it
is). Hadoop already has such checks; this would not be the first, and it does not
mean we are in favor of them. Do I need to make a better case? Votes are welcome.
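To make the 2nd option concrete, here is a rough sketch of an OS-dependent
default. The config key name and helper are illustrative only, not what the
patch actually uses:
{code:java}
// Illustrative only: an internal switch whose default is on for Linux and off
// elsewhere. The key name "dfs.datanode.transferTo.allowed" is made up here.
public class TransferToDefault {

  /** Default value of the internal switch, based on the OS we are running on. */
  public static boolean defaultForCurrentOs() {
    String os = System.getProperty("os.name", "");
    // On for Linux (where the numbers looked good), off on the rest.
    return os.toLowerCase().startsWith("linux");
  }

  public static void main(String[] args) {
    System.out.println("transferTo default on this OS: " + defaultForCurrentOs());
  }
}
{code}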
> Use FileChannel.transferTo() when data is read from DataNode.
> -------------------------------------------------------------
>
> Key: HADOOP-3164
> URL: https://issues.apache.org/jira/browse/HADOOP-3164
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.18.0
>
> Attachments: HADOOP-3164.patch, HADOOP-3164.patch, HADOOP-3164.patch,
> HADOOP-3164.patch, HADOOP-3164.patch
>
>
> HADOOP-2312 talks about using FileChannel's
> [{{transferTo()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)]
> and
> [{{transferFrom()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)]
> in DataNode.
> At the time, DataNode neither used NIO sockets nor wrote large chunks of
> contiguous block data to the socket. Hadoop 0.17 does both when data is served
> to clients (and to other datanodes). I am planning to try using transferTo() on
> trunk. This might reduce DataNode's CPU usage by another 50% or more.
> Once HADOOP-1702 is committed, we can look into using transferFrom().
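> For illustration, a minimal sketch of what the zero-copy send path could look
> like is below. The class and method names are made up; this only shows the
> shape of the transferTo() loop over a blocking socket, not the actual patch:
> {code:java}
> import java.io.FileInputStream;
> import java.io.IOException;
> import java.nio.channels.FileChannel;
> import java.nio.channels.SocketChannel;
>
> public class TransferToSketch {
>   /**
>    * Send 'count' bytes of the block file, starting at 'position', to the
>    * client socket without copying the data through a user-space buffer.
>    * Assumes 'sock' is a connected, blocking SocketChannel.
>    */
>   static void sendChunk(FileInputStream blockIn, SocketChannel sock,
>                         long position, long count) throws IOException {
>     FileChannel fileCh = blockIn.getChannel();
>     long pos = position;
>     long remaining = count;
>     while (remaining > 0) {
>       // transferTo() may send fewer bytes than requested, so loop until done.
>       long sent = fileCh.transferTo(pos, remaining, sock);
>       pos += sent;
>       remaining -= sent;
>     }
>   }
> }
> {code}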