[ https://issues.apache.org/jira/browse/HDFS-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126894#comment-14126894 ]

Hudson commented on HDFS-7005:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #675 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/675/])
HDFS-7005. DFS input streams do not timeout. Contributed by Daryn Sharp. 
(kihwal: rev 6a84f88c1190a8fecadd81deb6e7b8a69675fa91)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> DFS input streams do not timeout
> --------------------------------
>
>                 Key: HDFS-7005
>                 URL: https://issues.apache.org/jira/browse/HDFS-7005
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 3.0.0, 2.5.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.6.0
>
>         Attachments: HDFS-7005.patch
>
>
> Input streams have lost their timeout.  The problem appears to be that 
> {{DFSClient#newConnectedPeer}} does not set the read timeout.  During a 
> temporary network interruption the server closes the socket, unbeknownst 
> to the client host, whose read then blocks forever.
> The results are dire.  Services such as the RM, JHS, NMs, oozie servers, 
> etc. all need to be restarted to recover - unless you want to wait many 
> hours for the TCP keepalive to detect the broken socket.
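The failure mode described above can be reproduced with a plain socket, independent of HDFS. The sketch below (hypothetical class and port choices, not code from the patch) shows that a read on a socket with no read timeout blocks indefinitely when the peer goes silent, while a socket with {{setSoTimeout}} set fails fast with a {{SocketTimeoutException}} - presumably the behavior the patch restores for the peers created in {{DFSClient#newConnectedPeer}}.

```java
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // A local server that accepts the connection but never sends a
        // byte, simulating a peer that has silently gone away.
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket();
            client.connect(
                new InetSocketAddress("127.0.0.1", server.getLocalPort()), 1000);
            // Without this call, the read below would block indefinitely
            // (until TCP keepalive fires, many hours later). With it, the
            // read fails fast with SocketTimeoutException.
            client.setSoTimeout(500);
            Socket served = server.accept(); // handshake done; stay silent
            try (InputStream in = client.getInputStream()) {
                in.read();
                System.out.println("read returned (unexpected)");
            } catch (SocketTimeoutException e) {
                System.out.println("read timed out as expected");
            } finally {
                served.close();
            }
        }
    }
}
```

The same principle applies to the DFS client: each connected peer needs its read timeout set from the client's configured socket timeout, so a dead DataNode connection surfaces as a timeout instead of a hung service.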



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
