[ https://issues.apache.org/jira/browse/HDFS-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16399863#comment-16399863 ]

Vinayakumar B commented on HDFS-13288:
--------------------------------------

I missed the point "But the client process is a background service, it doesn't 
exit" before posting my above comment.
The Namenode renews the lease for the whole client, not per file. If the 
client is alive, there is no way for the Namenode to know that it has failed 
to close the file. The Namenode thinks the file is still being written, so it 
will never attempt to recover the lease, even if a "harder" limit is applied.
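
For anyone hitting this, a minimal sketch of what the reporter describes as 
"call recover lease on the file": a second client (or an admin tool) can ask 
the Namenode to recover the lease of one specific file via 
DistributedFileSystem#recoverLease. The path below is only a placeholder, and 
the sketch assumes fs.defaultFS points at the cluster.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class ForceLeaseRecovery {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Placeholder path: the file whose lease is stuck.
        Path stuck = new Path("/data/stuck-file.log");
        // recoverLease() asks the Namenode to start lease recovery for this
        // one file; it returns true once the file has been closed.
        boolean closed = ((DistributedFileSystem) fs).recoverLease(stuck);
        System.out.println("file closed after recovery: " + closed);
      }
    }

Note that recoverLease() returns immediately and may return false while block 
recovery is still in progress, so callers typically poll it until it returns 
true.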

> Why we don't add a harder lease expiration limit.
> -------------------------------------------------
>
>                 Key: HDFS-13288
>                 URL: https://issues.apache.org/jira/browse/HDFS-13288
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.6.5
>            Reporter: Igloo
>            Priority: Minor
>
> Currently there exist a soft expiration timeout (1 minute by default) and a 
> hard expiration timeout (60 minutes by default); the corresponding constants 
> are sketched after this quote.
> In our production environment, a client began writing a file a long time 
> (more than one year) ago. When the write finished and it tried to close the 
> output stream, the close failed (with some IOException, etc.).
> But the client process is a background service, so it doesn't exit, and the 
> lease has not been released for more than one year.
> The problem is that the lease for the file stays occupied, so we have to 
> call recoverLease on the file when doing decommission or append operations.
>  
> So I am wondering why we don't add an even harder lease expiration timeout: 
> when a lease lasts too long (maybe one month), revoke it.
>  
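
For reference, the two timeouts mentioned in the description are hardcoded 
constants on the branch-2 line, not configuration keys. A short sketch that 
prints them (constant names taken from 
org.apache.hadoop.hdfs.protocol.HdfsConstants):

    import org.apache.hadoop.hdfs.protocol.HdfsConstants;

    public class LeaseLimits {
      public static void main(String[] args) {
        // Soft limit (1 minute): after this, another writer may preempt the lease.
        System.out.println("soft limit ms: " + HdfsConstants.LEASE_SOFTLIMIT_PERIOD);
        // Hard limit (60 minutes): after this the Namenode recovers the lease
        // itself, but only if the client has stopped renewing (i.e. the
        // process died), which is exactly why it never fires in this report.
        System.out.println("hard limit ms: " + HdfsConstants.LEASE_HARDLIMIT_PERIOD);
      }
    }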


