[ https://issues.apache.org/jira/browse/HADOOP-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597690#action_12597690 ]

Hadoop QA commented on HADOOP-3381:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12382217/HADOOP-3381.patch
  against trunk revision 656939.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2494/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2494/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2494/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2494/console

This message is automatically generated.

> INode interlinks can multiply effect of memory leaks
> ----------------------------------------------------
>
>                 Key: HADOOP-3381
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3381
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3381.patch, HADOOP-3381.patch
>
>
> Say a directory 'DIR' has a directory tree under it with 10000 files and 
> directories. Each INode keeps refs to its parent and children. When DIR is 
> deleted, memory-wise we essentially delete the link from its parent (and 
> delete all of its blocks from {{blocksMap}}). We don't modify its children. 
> This is ok since the subtree forms an island of references and will be 
> gc-ed. That is when everything works as intended. But if a bug leaves a ref 
> from a valid object (there is a suspect; I will file another jira) to even 
> one of these 10000 files, it could hold up all of those INodes and related 
> objects. This can make a small memory leak many times more severe. (A 
> minimal sketch illustrating this follows below.)
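
For illustration, here is a minimal Java sketch (not the actual org.apache.hadoop.dfs.INode code; the class and field names below are hypothetical) of how the parent/children interlinks described above let a single stray reference retain an entire deleted subtree:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for an INode: each node links to its parent and children.
public class INodeLeakSketch {

    static class Node {
        final String name;
        Node parent;                          // back-reference to the parent
        final List<Node> children = new ArrayList<Node>();

        Node(String name) { this.name = name; }

        Node addChild(String childName) {
            Node child = new Node(childName);
            child.parent = this;              // child keeps a ref to its parent
            children.add(child);
            return child;
        }
    }

    public static void main(String[] args) {
        Node root = new Node("/");
        Node dir = root.addChild("DIR");

        Node leaked = null;                   // simulates a ref held by some still-valid object
        for (int i = 0; i < 10000; i++) {
            Node file = dir.addChild("file-" + i);
            if (i == 0) {
                leaked = file;
            }
        }

        // "Delete" DIR the way the description outlines: unlink it from its
        // parent, but leave the interlinks inside the subtree untouched.
        root.children.remove(dir);
        dir.parent = null;

        // Without the leaked reference, the subtree is an unreachable island
        // and would be garbage collected.  With it, the single leaked file
        // still points at DIR, and DIR's children list points at all 10000
        // siblings, so the whole subtree stays strongly reachable.
        System.out.println("Leaked file still reaches " + leaked.parent.name
                + ", which retains " + leaked.parent.children.size() + " children");
    }
}
{code}

The sketch shows why clearing the interlinks when a subtree is removed would limit the damage of any single leaked reference to one node rather than the whole tree.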

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
