[
https://issues.apache.org/jira/browse/HADOOP-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585370#action_12585370
]
Raghu Angadi commented on HADOOP-3164:
--------------------------------------
bummer... transferTo does not handle non-blocking sockets well. I have a patch
that works fine when the socket channel is in blocking mode. The JavaDoc clearly
indicates it can handle non-blocking sockets! One hack is to interpret the
IOException and check whether its message starts with "Resource temporarily".
I get the following with non-blocking sockets :
{noformat}
java.io.IOException: Resource temporarily unavailable
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:418)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:519)
        at org.apache.hadoop.dfs.DataNode.transferToFully(DataNode.java:1500)
        at org.apache.hadoop.dfs.DataNode.access$900(DataNode.java:84)
        at org.apache.hadoop.dfs.DataNode$BlockSender.sendChunks(DataNode.java:1807)
        at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:1880)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.readBlock(DataNode.java:1032)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:961)
        at java.lang.Thread.run(Thread.java:619)
{noformat}
Looks like all it needs to do is look for EAGAIN from the underlying system call.
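A minimal sketch of the message-inspection hack described above. The class and method names are illustrative, not from the actual patch; the only part taken from the JDK's observed behavior is that EAGAIN surfaces as a plain IOException whose message begins with "Resource temporarily":

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

// Sketch only: names here are illustrative and not from the Hadoop patch.
class TransferToEagain {

    // The JDK surfaces EAGAIN from the underlying sendfile(2) call as a
    // generic IOException, so the only check available at the Java level
    // is the message text.
    static boolean isTransientEagain(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.startsWith("Resource temporarily");
    }

    // Illustrative retry loop around transferTo(): on EAGAIN the caller
    // would wait for the socket to become writable (e.g. on a Selector)
    // and retry, instead of failing the whole transfer.
    static void transferFully(FileChannel fc, long pos, long count,
                              WritableByteChannel out) throws IOException {
        while (count > 0) {
            try {
                long n = fc.transferTo(pos, count, out);
                pos += n;
                count -= n;
            } catch (IOException e) {
                if (!isTransientEagain(e)) {
                    throw e; // a real error, not EAGAIN
                }
                // EAGAIN: socket buffer full; block on writability here
                // before retrying.
            }
        }
    }
}
```

The obvious fragility is that the message text is implementation- and locale-dependent, which is why matching on it is a hack rather than a fix.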
> Use FileChannel.transferTo() when data is read from DataNode.
> -------------------------------------------------------------
>
> Key: HADOOP-3164
> URL: https://issues.apache.org/jira/browse/HADOOP-3164
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
>
> HADOOP-2312 talks about using FileChannel's
> [{{transferTo()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)]
> and
> [{{transferFrom()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)]
> in DataNode.
> At the time, the DataNode neither used NIO sockets nor wrote large chunks of
> contiguous block data to the socket. Hadoop 0.17 does both when data is served to
> clients (and other datanodes). I am planning to try using transferTo() in
> trunk. This might reduce the DataNode's CPU usage by another 50% or more.
> Once HADOOP-1702 is committed, we can look into using transferFrom().