[
https://issues.apache.org/jira/browse/HDFS-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640937#comment-13640937
]
Todd Lipcon commented on HDFS-4504:
-----------------------------------
Does this fully solve the problem, given that leases are per-client, not
per-file? I.e., so long as the long-lived client has any other files open for
write, it will keep calling {{renewLease()}}, and the failed file will be
stuck open and unrecovered forever.
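
To make the concern concrete, here is a minimal sketch (the paths, the data, and the injected pipeline failure are assumptions for illustration, not code from the patch). Because both streams belong to the same client, one renewer thread keeps the shared lease alive even after one stream's {{close()}} has failed:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StuckLeaseSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // One client instance backs both streams, so they share a single
    // lease: leases are granted per-client, not per-file.
    FSDataOutputStream healthy = fs.create(new Path("/flume/healthy.log"));
    FSDataOutputStream broken = fs.create(new Path("/flume/broken.log"));

    broken.write(new byte[] {1, 2, 3});
    // Assume the write pipeline for /flume/broken.log fails here and
    // pipeline recovery fails as well.
    try {
      broken.close(); // throws IOException; the stream's resources,
                      // including its hold on the lease, are leaked
    } catch (IOException e) {
      System.err.println("close failed: " + e);
    }

    // While "healthy" stays open, the client's lease-renewer thread keeps
    // calling renewLease() for the whole client, so the namenode never
    // expires the lease and never recovers /flume/broken.log.
    healthy.write(new byte[] {4, 5, 6});
    healthy.close();
  }
}
{code}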
> DFSOutputStream#close doesn't always release resources (such as leases)
> -----------------------------------------------------------------------
>
> Key: HDFS-4504
> URL: https://issues.apache.org/jira/browse/HDFS-4504
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-4504.001.patch, HDFS-4504.002.patch
>
>
> {{DFSOutputStream#close}} can throw an {{IOException}} in some cases. One
> example is if there is a pipeline error and then pipeline recovery fails.
> Unfortunately, in this case, some of the resources used by the
> {{DFSOutputStream}} are leaked. One particularly important resource is the
> file lease.
> So it's possible for a long-lived HDFS client, such as Flume, to write many
> blocks to a file but then fail to close it. Worse, the
> {{LeaseRenewerThread}} inside the client will continue to renew the lease for
> the "undead" file. Future attempts to close the file just rethrow the
> previous exception, and the client can make no progress.
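
For illustration, here is a minimal sketch of the "undead" file from the caller's side (the path and the retry loop are hypothetical, not Flume's actual code): once {{close()}} has failed, every retry just rethrows the recorded exception, so the caller can never release the file.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UndeadFileSketch {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/flume/events.log"));
    out.write(new byte[] {1, 2, 3});

    // Assume the pipeline fails here and recovery fails too, so the
    // first close() throws. Before the fix, every later close() just
    // rethrows the same recorded exception.
    IOException lastError = null;
    for (int attempt = 0; attempt < 3; attempt++) {
      try {
        out.close();
        lastError = null;
        break; // close succeeded and the lease was released
      } catch (IOException e) {
        lastError = e; // same cause rethrown on every retry
      }
    }

    if (lastError != null) {
      // The stream is now "undead": unusable for writes, impossible to
      // close, yet its lease keeps being renewed as long as the client
      // has any other file open for write.
      System.err.println("could not close: " + lastError);
    }
  }
}
{code}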