[ https://issues.apache.org/jira/browse/HDFS-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284645#comment-14284645 ]

Colin Patrick McCabe commented on HDFS-7005:
--------------------------------------------

[~zsl2007], it appears that the DataNode is setting both a write and a read 
timeout on its sockets, but the DFSClient is only setting a read timeout.  If 
you want to file another JIRA to add a write timeout to DFSClient sockets, that 
might be a good idea.
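
For context, a minimal plain-Java sketch of that asymmetry (not the actual HDFS code; the endpoint and timeout values are made up): {{java.net.Socket}} only offers a read timeout via {{setSoTimeout}}, there is no symmetric write timeout on the socket itself, so adding one on the DFSClient side would need extra plumbing (e.g. the timed stream wrappers the DataNode path uses - mechanism assumed here), which is what a follow-up JIRA would cover.

{code:java}
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TimeoutSketch {
  public static Socket connect(String host, int port) throws Exception {
    Socket sock = new Socket();
    sock.connect(new InetSocketAddress(host, port), 20_000); // connect timeout
    sock.setSoTimeout(60_000); // read timeout: read() throws SocketTimeoutException
    // Note: java.net.Socket has no setWriteTimeout; a blocked write() has no
    // deadline unless the streams are wrapped with timed I/O.
    return sock;
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint, for illustration only.
    Socket sock = connect("datanode.example.com", 50010);
    OutputStream out = sock.getOutputStream();
    // A large write to a stalled peer can block indefinitely, since there is
    // no write timeout on the socket itself.
    out.write(new byte[64 * 1024]);
    sock.close();
  }
}
{code}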

> DFS input streams do not timeout
> --------------------------------
>
>                 Key: HDFS-7005
>                 URL: https://issues.apache.org/jira/browse/HDFS-7005
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 3.0.0, 2.5.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.6.0
>
>         Attachments: HDFS-7005.patch
>
>
> Input streams lost their timeout.  The problem appears to be that 
> {{DFSClient#newConnectedPeer}} does not set the read timeout.  During a 
> temporary network interruption the server will close the socket, unbeknownst 
> to the client host, which then blocks on a read forever.
> The results are dire.  Services such as the RM, JHS, NMs, Oozie servers, etc. 
> all need to be restarted to recover - unless you want to wait many hours for 
> the TCP keepalive to detect the broken socket.
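
To make the failure mode concrete, here is a minimal plain-Java sketch (not HDFS code; the endpoint and timeout values are made up) of the difference a read timeout on the underlying socket makes, which is what {{DFSClient#newConnectedPeer}} was missing:

{code:java}
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {
  public static void main(String[] args) throws Exception {
    Socket sock = new Socket();
    // Hypothetical endpoint, for illustration only.
    sock.connect(new InetSocketAddress("datanode.example.com", 50010), 20_000);

    // Without this call, a read against a half-open connection blocks until
    // the OS-level TCP keepalive finally notices (hours); with it, the read
    // fails after 60s and the client can recover, e.g. by trying another replica.
    sock.setSoTimeout(60_000);

    InputStream in = sock.getInputStream();
    try {
      int b = in.read();
      System.out.println("read: " + b);
    } catch (SocketTimeoutException e) {
      System.out.println("read timed out, likely a broken socket: " + e);
    } finally {
      sock.close();
    }
  }
}
{code}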



