[ https://issues.apache.org/jira/browse/HDFS-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14056945#comment-14056945 ]
Hadoop QA commented on HDFS-4504:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12599081/HDFS-4504.016.patch
against trunk revision .
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7313//console
This message is automatically generated.
> DFSOutputStream#close doesn't always release resources (such as leases)
> -----------------------------------------------------------------------
>
> Key: HDFS-4504
> URL: https://issues.apache.org/jira/browse/HDFS-4504
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-4504.001.patch, HDFS-4504.002.patch,
> HDFS-4504.007.patch, HDFS-4504.008.patch, HDFS-4504.009.patch,
> HDFS-4504.010.patch, HDFS-4504.011.patch, HDFS-4504.014.patch,
> HDFS-4504.015.patch, HDFS-4504.016.patch
>
>
> {{DFSOutputStream#close}} can throw an {{IOException}} in some cases. One
> example is if there is a pipeline error and then pipeline recovery fails.
> Unfortunately, in this case, some of the resources used by the
> {{DFSOutputStream}} are leaked. One particularly important such resource is
> the file lease.
> So it's possible for a long-lived HDFS client, such as Flume, to write many
> blocks to a file but then fail to close it. The {{LeaseRenewerThread}} inside
> the client will keep renewing the lease for the "undead" file, while future
> attempts to close the file just rethrow the previous exception, so the client
> can make no progress.
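
To make the failure mode in the description concrete, here is a minimal, self-contained sketch. The class and method names are hypothetical and this is not the actual {{DFSOutputStream}} code; it only illustrates the pattern described above, where a failed {{close()}} remembers the error but never releases the lease, so later {{close()}} calls just rethrow.

{code:java}
// Hypothetical sketch (NOT the real DFSOutputStream): a failed close() saves
// the exception but never frees the lease, so every later close() rethrows
// and the file stays "undead" while the lease keeps being renewed.
import java.io.IOException;

class UndeadStreamSketch {
    private IOException closeException;  // remembered after the first failed close()
    private boolean leaseReleased;       // the resource that should always be freed

    void close() throws IOException {
        if (closeException != null) {
            // Subsequent close() attempts make no progress: the old error is
            // simply rethrown, and the lease renewer keeps renewing the lease.
            throw closeException;
        }
        try {
            flushAndCompleteFile();      // may fail, e.g. pipeline recovery error
        } catch (IOException e) {
            closeException = e;          // bug: the lease is NOT released on this path
            throw e;
        }
        leaseReleased = true;            // only the success path frees the resource
    }

    // Stand-in for the write/complete work; simulates a pipeline recovery failure.
    private void flushAndCompleteFile() throws IOException {
        throw new IOException("pipeline recovery failed");
    }
}
{code}

The patches attached to this issue aim to close that gap: resources such as the lease should be released even when {{close()}} fails, instead of being leaked as in the sketch above.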
--
This message was sent by Atlassian JIRA
(v6.2#6252)