[ https://issues.apache.org/jira/browse/HADOOP-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12596997#action_12596997 ]
Hadoop QA commented on HADOOP-3382:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12382065/HADOOP-3382.patch
against trunk revision 656480.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified
tests.
Please justify why no tests are needed for this patch.
-1 patch. The patch command could not apply the patch.
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2474/console
This message is automatically generated.
> Memory leak when files are not cleanly closed
> ---------------------------------------------
>
> Key: HADOOP-3382
> URL: https://issues.apache.org/jira/browse/HADOOP-3382
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.15.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Priority: Blocker
> Fix For: 0.17.0
>
> Attachments: HADOOP-3382.patch, HADOOP-3382.patch, HADOOP-3382.patch,
> memleak.txt
>
>
> {{FSNamesystem.internalReleaseCreate()}} is invoked on files that are open
> for writing but not cleanly closed, e.g. when the client invokes
> {{abandonFileInProgress()}} or when a lease expires. It deletes the last block
> if it has a length of zero. The block is deleted from the file INode but not
> from {{blocksMap}}, which leaves a dangling reference to the file until the
> NameNode is restarted. When this happens, HADOOP-3381 multiplies the amount
> of memory leaked.