[ https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536553#comment-14536553 ]

Hudson commented on HDFS-8311:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. 
(Esteban Gutierrez via Yongjun Zhang) (yzhang: rev 
730f9930a48259f34e48404aee51e8d641cc3d36)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DataStreamer.transfer() should timeout the socket InputStream.
> --------------------------------------------------------------
>
>                 Key: HDFS-8311
>                 URL: https://issues.apache.org/jira/browse/HDFS-8311
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>            Reporter: Esteban Gutierrez
>            Assignee: Esteban Gutierrez
>             Fix For: 2.8.0
>
>         Attachments: 
> 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, 
> HDFS-8311.001.patch
>
>
> While validating some HA failure modes, we found that HDFS clients can take a 
> long time to recover, or sometimes don't recover at all, because we don't set 
> up the socket timeout on the InputStream:
> {code}
> private void transfer() {
>   ...
>   OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
>   InputStream unbufIn = NetUtils.getInputStream(sock); // no read timeout is set
>   ...
> }
> {code}
> The InputStream should have its own timeout in the same way as the 
> OutputStream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
