[ https://issues.apache.org/jira/browse/HADOOP-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590671#action_12590671 ]

Raghu Angadi commented on HADOOP-3164:
--------------------------------------

- #1: {{useChannelForTransferTo}} can be removed. Since there is a bug in 
Linux, I was being conservative about untested OSes, but we need not be.
- #2: Always calling waitForWritable() before transferTo() will work (for 
practical purposes), and the extra 4 system calls mostly won't be noticeable. The 
main thing I was wondering about is that we might still hit the Linux bug in rare 
cases, since there is no _promise_ that sendfile() will not return EAGAIN after 
select() reports the socket as writable. I will leave a comment to this effect 
(a sketch of the retry loop follows below). 
-- How about putting transferToFully() in IOUtils rather than SocketOutputStream? 
It is more like readFully().
- #3: sendBlock() needs a DataOutputStream to write the checksum and some other 
data. If it is not passed in, sendBlock() needs to create one. I think the current 
interface is ok (it is DataNode-internal).
-- {{this.out}} and {{out}} existed before, but we can fix it.

I will update the patch.
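
For illustration only, here is a minimal sketch of the pattern discussed in #2: wait 
for the socket to be writable before each transferTo() call, and keep looping when a 
call makes no progress, since sendfile() on Linux may still return EAGAIN even after 
select() says the socket is writable. This is not the actual patch; the method name 
transferToFully() and the Selector-based wait are placeholders for Hadoop's own 
SocketOutputStream / waitForWritable() machinery.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TransferToFullySketch {

  // Hypothetical helper, analogous in spirit to readFully():
  // keeps calling transferTo() until `count` bytes have been sent.
  public static void transferToFully(FileChannel file, SocketChannel sock,
                                     long position, long count) throws IOException {
    sock.configureBlocking(false);
    try (Selector selector = Selector.open()) {
      sock.register(selector, SelectionKey.OP_WRITE);
      long pos = position;
      long remaining = count;
      while (remaining > 0) {
        // "waitForWritable": block until the socket is reported writable.
        selector.select();
        selector.selectedKeys().clear();
        // transferTo() may transfer fewer bytes than requested, or even zero
        // (e.g. EAGAIN right after select()); just loop and wait again.
        long transferred = file.transferTo(pos, remaining, sock);
        pos += transferred;
        remaining -= transferred;
      }
    }
  }
}
{code}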

> Use FileChannel.transferTo() when data is read from DataNode.
> -------------------------------------------------------------
>
>                 Key: HADOOP-3164
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3164
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3164.patch, HADOOP-3164.patch, HADOOP-3164.patch, 
> HADOOP-3164.patch
>
>
> HADOOP-2312 talks about using FileChannel's 
> [{{transferTo()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)]
>  and 
> [{{transferFrom()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)]
>  in DataNode. 
> At the time, the DataNode neither used NIO sockets nor wrote large chunks of 
> contiguous block data to the socket. Hadoop 0.17 does both when data is served to 
> clients (and other datanodes). I am planning to try using transferTo() in the 
> trunk. This might reduce the DataNode's CPU usage by another 50% or more.
> Once HADOOP-1702 is committed, we can look into using transferFrom().
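
As a self-contained illustration (not DataNode code) of the zero-copy pattern the 
issue description refers to, the snippet below sends a file to a socket with 
FileChannel.transferTo() instead of copying through user-space buffers. The file 
path and destination address are hypothetical.

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class ZeroCopySendSketch {
  public static void main(String[] args) throws IOException {
    // Hypothetical block file and destination, for illustration only.
    try (FileInputStream fis = new FileInputStream("/tmp/block.dat");
         SocketChannel sock =
             SocketChannel.open(new InetSocketAddress("localhost", 50010))) {
      FileChannel file = fis.getChannel();
      long pos = 0;
      long size = file.size();
      while (pos < size) {
        // transferTo() may send fewer bytes than requested, so loop until done.
        pos += file.transferTo(pos, size - pos, sock);
      }
    }
  }
}
{code}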

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.