[ 
https://issues.apache.org/jira/browse/HDFS-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16399841#comment-16399841
 ] 

Vinayakumar B commented on HDFS-13288:
--------------------------------------

The Namenode has a thread, {{LeaseManager.Monitor}}, which checks for leases that have 
exceeded the hard limit and triggers lease recovery for those files.
If an exception occurs while recovering such a lease, the Namenode retries the lease 
recovery every hour (the hard-limit period).

Can you check in the logs whether the following message is present?
{noformat}LOG.info("{} has expired hard limit", leaseToCheck); {noformat}
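For illustration, here is a minimal, self-contained sketch of the idea described above. It is not the actual NameNode code; the class, constant, and method names ({{HardLimitSketch}}, {{HARD_LIMIT_MS}}, {{recoverLease}}) are made up. The point it shows: leases older than the hard limit trigger recovery, and a failed recovery leaves the lease in place, so a later pass of the monitor retries it.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class HardLimitSketch {

  // Hypothetical constant; the real default hard limit is 60 minutes.
  static final long HARD_LIMIT_MS = 60L * 60 * 1000;

  // Simplified stand-in for a lease held by a client on an open file.
  static class Lease {
    final String holder;
    long lastRenewalMs;

    Lease(String holder, long lastRenewalMs) {
      this.holder = holder;
      this.lastRenewalMs = lastRenewalMs;
    }

    boolean expiredHardLimit(long nowMs) {
      return nowMs - lastRenewalMs > HARD_LIMIT_MS;
    }
  }

  // One pass of the monitor: trigger recovery for every lease past the hard limit.
  // A failed recovery leaves the lease in place, so the next pass retries it.
  static void checkLeases(List<Lease> leases, long nowMs) {
    for (Lease lease : leases) {
      if (lease.expiredHardLimit(nowMs)) {
        System.out.println(lease.holder + " has expired hard limit");
        try {
          recoverLease(lease);            // hypothetical recovery hook
          lease.lastRenewalMs = nowMs;    // successful recovery renews/clears the lease
        } catch (Exception e) {
          // Swallow and retry on a later pass, mirroring the retry-every-hour
          // behaviour described in the comment above.
          System.err.println("recovery failed, will retry: " + e);
        }
      }
    }
  }

  static void recoverLease(Lease lease) {
    // Placeholder for NameNode-side block/lease recovery.
  }

  public static void main(String[] args) {
    List<Lease> leases = new ArrayList<>();
    leases.add(new Lease("DFSClient_example", 0L)); // renewed long ago
    checkLeases(leases, System.currentTimeMillis());
  }
}
{code}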

> Why we don't add a harder lease expiration limit.
> -------------------------------------------------
>
>                 Key: HDFS-13288
>                 URL: https://issues.apache.org/jira/browse/HDFS-13288
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.6.5
>            Reporter: Igloo
>            Priority: Minor
>
> Currently there exist a soft expiration timeout (1 minute by default) and a hard 
> expiration timeout (60 minutes by default). 
> On our production environment, a client began writing a file a long time 
> (more than one year) ago. When the write finished and the client tried to close the 
> output stream, the close failed (with some IOException, etc.).  
> But the client process is a background service and does not exit, so the lease 
> has not been released for more than a year.
> The problem is that the lease on the file stays occupied, so we have to call 
> recoverLease on the file when doing decommission or append operations (see the 
> client-side sketch below).
>  
> So I am wondering why we don't add an even harder lease expiration timeout: when a 
> lease lasts too long (maybe one month), revoke it. 
>  

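Regarding the manual workaround mentioned in the description: lease recovery can be requested from the client side with {{DistributedFileSystem.recoverLease()}}. Below is a minimal sketch, assuming the HDFS client libraries are on the classpath and using a hypothetical file path; it is an illustration, not the reporter's exact setup.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();               // picks up core-site.xml / hdfs-site.xml
    Path stuckFile = new Path("/data/example-stuck-file");  // hypothetical path

    FileSystem fs = stuckFile.getFileSystem(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      throw new IllegalStateException("recoverLease is only available on HDFS");
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;

    // Ask the NameNode to start lease recovery; returns true once the file
    // is closed and the lease released, false if recovery is still in progress.
    boolean closed = dfs.recoverLease(stuckFile);
    while (!closed) {
      Thread.sleep(5000);                                    // poll until recovery completes
      closed = dfs.recoverLease(stuckFile);
    }
    System.out.println("Lease released for " + stuckFile);
  }
}
{code}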

