[ https://issues.apache.org/jira/browse/HADOOP-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12596660#action_12596660 ]

rangadi edited comment on HADOOP-3382 at 5/14/08 12:43 AM:
----------------------------------------------------------------

The patch, tested manually, is attached. As my comment in the patch 
describes, we do different things when a block is removed, depending on the 
context. Whatever is done when a block is removed from the NameNode should be 
in one place. This patch only fixes the observed leak.

I was thinking of writing a unit test using abandonFileInProgress(), but it is 
already deprecated. Since this is a fairly simple patch and was tested manually, 
it is probably ok not to have a unit test.

> Memory leak when files are not cleanly closed
> ---------------------------------------------
>
>                 Key: HADOOP-3382
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3382
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>         Attachments: HADOOP-3382.patch, memleak.txt
>
>
> {{FSNamesystem.internalReleaseCreate()}} is invoked on files that are open 
> for writing but not cleanly closed, e.g. when the client invokes 
> {{abandonFileInProgress()}} or when the lease expires. It deletes the last 
> block if it has a length of zero. The block is deleted from the file INode 
> but not from {{blocksMap}}. This leaves a reference to the file until the 
> NameNode is restarted. When this happens, HADOOP-3381 multiplies the amount 
> of memory leaked.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
