[ 
https://issues.apache.org/jira/browse/HADOOP-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12596723#action_12596723
 ] 

Hadoop QA commented on HADOOP-3382:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12382016/HADOOP-3382.patch
  against trunk revision 656153.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified 
tests.
                        Please justify why no tests are needed for this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2463/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2463/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2463/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2463/console

This message is automatically generated.

> Memory leak when files are not cleanly closed
> ---------------------------------------------
>
>                 Key: HADOOP-3382
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3382
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>         Attachments: HADOOP-3382.patch, HADOOP-3382.patch, memleak.txt
>
>
> {{FSNamesystem.internalReleaseCreate()}} is invoked on files that are open 
> for writing but not cleanly closed, e.g. when the client invokes 
> {{abandonFileInProgress()}} or when the lease expires. It deletes the last 
> block if it has a length of zero. The block is deleted from the file INode 
> but not from {{blocksMap}}, which leaves a reference to the file until the 
> NameNode is restarted. When this happens, HADOOP-3381 multiplies the amount 
> of memory leaked.
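The leak pattern described above can be sketched in plain Java. This is a minimal illustration, not Hadoop's actual classes: it assumes a simplified INode holding a block list and a global blocksMap keyed by block id, and shows how removing a block from the INode alone leaves a stale map entry pinning the file in memory.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified structures for illustration only.
public class BlocksMapLeakSketch {
    static class INode {
        final List<Long> blocks = new ArrayList<>();
    }

    public static void main(String[] args) {
        Map<Long, INode> blocksMap = new HashMap<>();
        INode file = new INode();

        long lastBlockId = 42L;           // zero-length last block
        file.blocks.add(lastBlockId);
        blocksMap.put(lastBlockId, file); // global index: block -> owning file

        // Buggy cleanup: the block is removed from the file INode only.
        file.blocks.remove(lastBlockId);

        // blocksMap still holds the block entry, and through it a reference
        // to the INode, so the file can never be garbage-collected.
        System.out.println("leaked=" + blocksMap.containsKey(lastBlockId));

        // Correct cleanup must also drop the entry from blocksMap.
        blocksMap.remove(lastBlockId);
        System.out.println("leaked=" + blocksMap.containsKey(lastBlockId));
    }
}
```

Note that `file.blocks.remove(lastBlockId)` boxes the `long` to `Long` and so calls `List.remove(Object)`, removing the value rather than an index.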

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
