[
https://issues.apache.org/jira/browse/HADOOP-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12598999#action_12598999
]
Hudson commented on HADOOP-3381:
--------------------------------
Integrated in Hadoop-trunk #499 (See
[http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/499/])
> INode interlinks can multiply effect of memory leaks
> ----------------------------------------------------
>
> Key: HADOOP-3381
> URL: https://issues.apache.org/jira/browse/HADOOP-3381
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.18.0
>
> Attachments: HADOOP-3381.patch, HADOOP-3381.patch
>
>
> Say a directory 'DIR' has a directory tree under it with 10000 files and
> directories. Each INode keeps refs to its parent and children. When DIR is
> deleted, memory-wise we essentially delete only the link from its parent (and
> delete all the blocks from {{blocksMap}}). We don't modify its children. This
> is ok since the subtree forms an island of references and will be gc-ed.
> That's when everything is perfect. But if there is a bug that leaves a ref
> from a valid object (there is a suspect; I will file another jira) to even
> one of these 10000 files, it could hold up all the INodes and related
> objects. This can make a small memory leak many times more severe.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.