[
https://issues.apache.org/jira/browse/HDFS-3342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266201#comment-13266201
]
Todd Lipcon commented on HDFS-3342:
-----------------------------------
This error is easy to reproduce. Put a medium-sized file in HDFS (I used a ~4MB
vmlinuz image), then run something like:
{code}
$ hadoop fs -cat vmlinuz | perl -e 'while (1) { sleep 5; $x += 5; print "$x\n"; }'
{code}
The perl one-liner never reads its stdin, so once the pipe buffer fills, the cat
stops pulling data from the DataNode. After 8 minutes (the default write timeout)
you'll see a SocketTimeoutException in the DN logs.
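For reference, a roughly equivalent way to trigger the same stall programmatically (a sketch, not part of the original report; the file path, buffer size, and sleep duration are placeholders -- any file larger than the socket buffers will do):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SlowReader {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Assumed path: point this at any medium-sized file in HDFS.
    FSDataInputStream in = fs.open(new Path("/user/todd/vmlinuz"));
    byte[] buf = new byte[4096];
    in.read(buf);                 // pull a little data so the DN starts sending
    Thread.sleep(10 * 60 * 1000); // then go idle past the 8-minute write timeout
    in.close();
    fs.close();
  }
}
{code}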
> SocketTimeoutException in BlockSender.sendChunks could have a better error
> message
> ----------------------------------------------------------------------------------
>
> Key: HDFS-3342
> URL: https://issues.apache.org/jira/browse/HDFS-3342
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: data-node
> Affects Versions: 2.0.0
> Reporter: Todd Lipcon
> Priority: Minor
>
> Currently, if a client connects to a DN and begins to read a block, but then
> stops calling read() for a long period of time, the DN will log a
> SocketTimeoutException: "480000 millis timeout while waiting for channel to be
> ready for write." This happens because there is no "keepalive" functionality of
> any kind. At a minimum, we should improve this to an INFO-level log message
> which simply says that the client likely stopped reading, so it is being
> disconnected.
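For illustration only (this is not the actual BlockSender code; the class, method signature, and variable names below are invented), the proposed change amounts to catching the write timeout and logging a concise INFO message instead of surfacing the raw exception:
{code}
import java.io.IOException;
import java.io.OutputStream;
import java.net.SocketTimeoutException;
import java.util.logging.Logger;

/** Sketch of the suggested handling for a stalled reader, not the DN code. */
public class StalledReaderExample {
  private static final Logger LOG =
      Logger.getLogger(StalledReaderExample.class.getName());

  /** 'out' stands in for the DataNode's socket output stream. */
  static void sendChunk(OutputStream out, byte[] chunk, String clientAddr,
                        long writeTimeoutMs) throws IOException {
    try {
      out.write(chunk);
    } catch (SocketTimeoutException ste) {
      // The client did not consume data within the write timeout; it most
      // likely stopped calling read(), so note that at INFO level and let
      // the caller close the connection.
      LOG.info("Client " + clientAddr + " did not read for " + writeTimeoutMs
          + " ms; likely stopped reading, disconnecting.");
      throw ste;
    }
  }
}
{code}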