[
https://issues.apache.org/jira/browse/HADOOP-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Raghu Angadi updated HADOOP-3164:
---------------------------------
Attachment: HADOOP-3164.patch
This patch keeps the socket non-blocking. Is the following hack for detecting
"EAGAIN" acceptable? The attached patch passes all the tests on Linux.
{code}
[...] try {
  nTransfered = (int) inChannel.transferTo(startPos, len, outChannel);
} catch (IOException e) {
  /* At least JDK 1.6.0 on Linux seems to throw an IOException when the
   * socket is full. Hopefully future versions will handle EAGAIN better.
   * For now, look for a specific string in the exception message.
   */
  if (e.getMessage().startsWith("Resource temporarily unavailable")) {
    out.waitForWritable();
    continue;
  } else {
    throw e;
  }
}
{code}
The IOException message could differ on other systems. For now, we could use
transferTo() only on systems where we know the text of the message. I expect
this issue to be fixed in future versions of the JRE.
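For illustration, here is a minimal self-contained sketch of the same idea: wrap
transferTo() in a retry loop that treats the "Resource temporarily unavailable"
IOException as EAGAIN and waits until the socket is writable again. It uses a plain
java.nio Selector in place of the patch's out.waitForWritable(), and the class and
method names (TransferToRetry, transferToFully) are hypothetical, not code from the patch.
{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TransferToRetry {

  /**
   * Transfers [startPos, startPos + len) from fileChannel to a non-blocking
   * socket channel, treating the Linux "Resource temporarily unavailable"
   * IOException as EAGAIN and waiting until the socket is writable again.
   */
  static void transferToFully(FileChannel fileChannel, SocketChannel socket,
                              long startPos, long len) throws IOException {
    Selector selector = Selector.open();
    try {
      // The socket must already be in non-blocking mode to be registered.
      socket.register(selector, SelectionKey.OP_WRITE);
      long pos = startPos;
      long remaining = len;
      while (remaining > 0) {
        long n;
        try {
          n = fileChannel.transferTo(pos, remaining, socket);
        } catch (IOException e) {
          // JDK 1.6.0 on Linux surfaces EAGAIN as an IOException with this
          // message instead of returning 0; treat it as "socket full".
          if (e.getMessage() != null &&
              e.getMessage().startsWith("Resource temporarily unavailable")) {
            selector.select();              // block until OP_WRITE is ready
            selector.selectedKeys().clear();
            continue;
          }
          throw e;
        }
        pos += n;
        remaining -= n;
        if (n == 0 && remaining > 0) {
          selector.select();                // nothing written; wait and retry
          selector.selectedKeys().clear();
        }
      }
    } finally {
      selector.close();
    }
  }
}
{code}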
> Use FileChannel.transferTo() when data is read from DataNode.
> -------------------------------------------------------------
>
> Key: HADOOP-3164
> URL: https://issues.apache.org/jira/browse/HADOOP-3164
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Attachments: HADOOP-3164.patch, HADOOP-3614.patch
>
>
> HADOOP-2312 talks about using FileChannel's
> [{{transferTo()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)]
> and
> [{{transferFrom()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)]
> in DataNode.
> At the time, DataNode neither used NIO sockets nor wrote large chunks of
> contiguous block data to the socket. Hadoop 0.17 does both when data is served
> to clients (and other datanodes). I am planning to try using transferTo() in the
> trunk. This might reduce DataNode's CPU usage by another 50% or more.
> Once HADOOP-1702 is committed, we can look into using transferFrom().
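As a rough illustration of the description above, serving a range of a block file
with transferTo() could look like the sketch below. BlockSender and sendBlock are
illustrative names rather than actual DataNode code, and a blocking SocketChannel
is assumed for simplicity; the patch in the comment above handles the non-blocking case.
{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class BlockSender {

  /**
   * Sends [offset, offset + length) of a block file to the client socket with
   * transferTo(), letting the kernel move the data directly instead of copying
   * it through a user-space byte[] as a conventional read/write loop does.
   */
  static void sendBlock(FileInputStream blockIn, SocketChannel clientSock,
                        long offset, long length) throws IOException {
    FileChannel fileChannel = blockIn.getChannel();
    long pos = offset;
    long remaining = length;
    while (remaining > 0) {
      // transferTo() may send fewer bytes than requested; loop until done.
      long sent = fileChannel.transferTo(pos, remaining, clientSock);
      pos += sent;
      remaining -= sent;
    }
  }
}
{code}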