[
https://issues.apache.org/jira/browse/HADOOP-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591472#action_12591472
]
Raghu Angadi commented on HADOOP-3164:
--------------------------------------
bq. With transferTo(), DataNode does not actually allocate the buffer. In that
sense, we could increase the size in DataNode without affecting client
buffering (apart from a slight increase in the buffer for checksums).
I mean, DataNode could use something like max(64KB, configured buffer size)
when transferTo() is enabled. 64KB of data implies 512 bytes of checksum data,
so the client needs to read 512 bytes of checksum before it reads the actual data.
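To make the arithmetic explicit, here is a minimal sketch, assuming the usual defaults of 512 bytes of data per checksum chunk and a 4-byte CRC32 per chunk (assumed values, not read from the patch or configuration):

{code:java}
// Minimal sketch: why 64KB of block data implies 512 bytes of checksum data.
// The constants below are assumed defaults, not values taken from the patch.
public class ChecksumMath {
  public static void main(String[] args) {
    final int bytesPerChecksum = 512;     // assumed io.bytes.per.checksum default
    final int checksumSize = 4;           // CRC32 checksum is 4 bytes per chunk
    final int packetDataLen = 64 * 1024;  // 64KB of block data

    int chunks = packetDataLen / bytesPerChecksum;  // 128 chunks
    int checksumBytes = chunks * checksumSize;      // 512 bytes
    System.out.println("checksum bytes per 64KB of data: " + checksumBytes);
  }
}
{code}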
> Use FileChannel.transferTo() when data is read from DataNode.
> -------------------------------------------------------------
>
> Key: HADOOP-3164
> URL: https://issues.apache.org/jira/browse/HADOOP-3164
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.18.0
>
> Attachments: HADOOP-3164.patch, HADOOP-3164.patch, HADOOP-3164.patch,
> HADOOP-3164.patch, HADOOP-3164.patch
>
>
> HADOOP-2312 talks about using FileChannel's
> [{{transferTo()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)]
> and
> [{{transferFrom()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)]
> in DataNode.
> At the time, DataNode neither used NIO sockets nor wrote large chunks of
> contiguous block data to the socket. Hadoop 0.17 does both when data is served to
> clients (and other datanodes). I am planning to try using transferTo() in
> trunk. This might reduce DataNode's CPU usage by another 50% or more.
> Once HADOOP-1702 is committed, we can look into using transferFrom().
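For reference, a minimal sketch of the transferTo() pattern described above; this is not the actual DataNode patch, and the block file path and port are placeholders:

{code:java}
// Minimal sketch of sending a file region with FileChannel.transferTo(),
// letting the kernel copy file data to the socket without a user-space buffer.
// Not the DataNode implementation; the path and port below are placeholders.
import java.io.FileInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class TransferToSketch {
  public static void main(String[] args) throws IOException {
    FileInputStream in = new FileInputStream("/path/to/block/file");  // placeholder path
    SocketChannel sock =
        SocketChannel.open(new InetSocketAddress("localhost", 50010)); // placeholder port
    try {
      FileChannel file = in.getChannel();
      long pos = 0;
      long remaining = file.size();
      // transferTo() may transfer fewer bytes than requested, so loop until done.
      while (remaining > 0) {
        long sent = file.transferTo(pos, remaining, sock);
        pos += sent;
        remaining -= sent;
      }
    } finally {
      sock.close();
      in.close();
    }
  }
}
{code}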