[ https://issues.apache.org/jira/browse/HADOOP-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12702633#action_12702633 ]

dhruba borthakur commented on HADOOP-2757:
------------------------------------------

I am testing the case where clients hang if the server(s) go down. If the 
namenode goes down, the DFSClient fails to renew leases and marks clientRunning 
as false. This should cause close() to bail out.

In the case where all the datanodes in a pipeline go down but the namenode is 
alive, close() still hangs. Is this the case you are referring to?
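
To make the first mechanism concrete, here is a minimal sketch (not the actual
DFSClient code) of the interplay described above: the lease-renewal thread
clears a shared flag when renewals to the namenode fail, and close() checks
that flag so it can bail out instead of spinning forever. clientRunning mirrors
the field named in the comment; everything else is illustrative.

{code}
import java.io.IOException;

class LeaseAwareClient {
    // Cleared by the lease-renewal thread when renewals to the namenode fail.
    private volatile boolean clientRunning = true;

    // Invoked by the renewal thread after repeated renewal failures.
    void handleRenewalFailure() {
        clientRunning = false;
    }

    // close() polls until the last block is acknowledged, but bails out
    // as soon as the client has been marked dead.
    void close() throws IOException {
        while (!lastBlockAcked()) {
            if (!clientRunning) {
                throw new IOException("Client no longer running; aborting close()");
            }
            try {
                Thread.sleep(400); // back off before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while closing", e);
            }
        }
    }

    private boolean lastBlockAcked() {
        return false; // placeholder; the real code asks the pipeline/namenode
    }
}
{code}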

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>
>                 Key: HADOOP-2757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2757
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>         Attachments: softMount1.patch, softMount1.patch
>
>
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps 
> throwing {{NotYetReplicated}} exceptions, for whatever reason. It's pretty 
> annoying for a user. Should the loop inside close() have a timeout? If so, how 
> long? It could probably be something like 10 minutes.
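
For reference, a hedged sketch of the timeout the description asks about:
bound the {{NotYetReplicated}} retry loop in close() instead of waiting
forever. CLOSE_TIMEOUT_MS and completeFileOnNamenode() are illustrative
stand-ins, not the actual Hadoop identifiers.

{code}
import java.io.IOException;

class TimedClose {
    private static final long CLOSE_TIMEOUT_MS = 10L * 60 * 1000; // ~10 minutes

    void closeWithTimeout() throws IOException {
        long deadline = System.currentTimeMillis() + CLOSE_TIMEOUT_MS;
        while (true) {
            try {
                completeFileOnNamenode();
                return; // namenode accepted the close
            } catch (IOException notYetReplicated) {
                if (System.currentTimeMillis() >= deadline) {
                    throw new IOException("close() timed out after "
                            + CLOSE_TIMEOUT_MS + " ms", notYetReplicated);
                }
                try {
                    Thread.sleep(400); // wait before asking again
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new IOException("Interrupted during close()", e);
                }
            }
        }
    }

    private void completeFileOnNamenode() throws IOException {
        // Placeholder for the RPC that can keep failing while the last
        // block is still under-replicated.
        throw new IOException("NotYetReplicated");
    }
}
{code}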

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
