[ https://issues.apache.org/jira/browse/HADOOP-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12713259#action_12713259 ]
Hairong Kuang commented on HADOOP-2757:
---------------------------------------

> As an administrator of a cluster, I find it easier to set a time limit for a
> rpc connection to bail out if it is not receiving response data continuously.

I am not sure all administrators want this. This is going to revert what we did in HADOOP-2188. The IPC client already has a configured read timeout. If you do want a timeout on read, maybe it is better to have a configuration setting for whether the client needs a Ping or not. There is no need to have multiple read timeout configurations.

Suppose RPC can fail on SocketTimeoutException; why do you need a timeout on a single RPC? Why can't the client close the lease on SocketTimeoutException? I think one timeout and a hard retry limit on close will serve your purpose well. Why do we need so many different layers of timeouts? Maybe I missed something.

BTW, the configurations inactivity.timeout and softmount.timeout are not general at all. One is only for the leasechecker and the other is only for close.

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>
>                 Key: HADOOP-2757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2757
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>         Attachments: softMount1.patch, softMount1.patch, softMount2.patch, softMount3.patch
>
>
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps
> throwing {{NotYetReplicated}} exceptions, for whatever reason. It's pretty
> annoying for a user. Should the loop inside close have a timeout? If so, how
> long? It could probably be something like 10 minutes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
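A minimal sketch of the "one timeout plus a hard retry limit on close" idea discussed in the comment above. This is only an illustration under assumed names: {{completeFile()}}, {{MAX_CLOSE_RETRIES}} and {{RETRY_INTERVAL_MS}} are hypothetical placeholders, not the actual DFSClient API or the attached softMount patches.

{code:java}
// Illustrative sketch only: completeFile(), MAX_CLOSE_RETRIES and RETRY_INTERVAL_MS
// are hypothetical names, not the real DFSClient code.
import java.io.IOException;
import java.net.SocketTimeoutException;

class BoundedClose {
  private static final int MAX_CLOSE_RETRIES = 600;    // hard retry limit (~10 min at 1s per attempt)
  private static final long RETRY_INTERVAL_MS = 1000L; // wait between attempts

  /** Retries the final completeFile call, but gives up instead of looping forever. */
  void close() throws IOException {
    for (int attempt = 1; attempt <= MAX_CLOSE_RETRIES; attempt++) {
      try {
        if (completeFile()) {
          return;                                       // namenode accepted the close
        }
        // Blocks not yet replicated: wait and retry, but only up to the hard limit.
        Thread.sleep(RETRY_INTERVAL_MS);
      } catch (SocketTimeoutException e) {
        // A single IPC read timeout fails the close, so no extra timeout layer is needed here.
        throw new IOException("close timed out talking to the namenode", e);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException("close interrupted", e);
      }
    }
    throw new IOException("could not complete file after " + MAX_CLOSE_RETRIES + " attempts");
  }

  /** Stand-in for the namenode completeFile RPC; returns false while blocks are under-replicated. */
  private boolean completeFile() throws IOException {
    return true;
  }
}
{code}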