[ 
https://issues.apache.org/jira/browse/HADOOP-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597119#action_12597119
 ] 

Hudson commented on HADOOP-3382:
--------------------------------

Integrated in Hadoop-trunk #492 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/492/])

> Memory leak when files are not cleanly closed
> ---------------------------------------------
>
>                 Key: HADOOP-3382
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3382
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.15.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>         Attachments: HADOOP-3382.patch, HADOOP-3382.patch, HADOOP-3382.patch, 
> memleak.txt
>
>
> {{FSNamesystem.internalReleaseCreate()}} is invoked on files that are open 
> for writing but not cleanly closed, e.g. when a client invokes 
> {{abandonFileInProgress()}} or when the lease expires. It deletes the last 
> block if it has a length of zero. The block is deleted from the file INode 
> but not from {{blocksMap}}, which leaves a reference to the file until the 
> NameNode is restarted. When this happens, HADOOP-3381 multiplies the amount 
> of memory leaked.
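The leak described above can be sketched in simplified form. This is an illustrative model, not Hadoop code: the names {{Block}}, {{INodeFile}} and {{blocksMap}} mirror the HDFS concepts, but the classes here are hypothetical stand-ins showing how removing a zero-length last block from the inode alone leaves a dangling {{blocksMap}} entry.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified, hypothetical model of the structures involved in the leak.
public class LeakSketch {
    static class Block {
        final long id;
        final long numBytes;
        Block(long id, long numBytes) { this.id = id; this.numBytes = numBytes; }
    }

    static class INodeFile {
        final List<Block> blocks = new ArrayList<>();
    }

    // Global map from block id to block, analogous to the namenode's blocksMap.
    static final Map<Long, Block> blocksMap = new HashMap<>();

    // Buggy variant: drops a zero-length last block from the inode only,
    // leaving a dangling entry in blocksMap -- the leak.
    static void releaseBuggy(INodeFile file) {
        if (file.blocks.isEmpty()) return;
        Block last = file.blocks.get(file.blocks.size() - 1);
        if (last.numBytes == 0) {
            file.blocks.remove(file.blocks.size() - 1);
            // blocksMap.remove(last.id) is missing here.
        }
    }

    // Fixed variant: removes the block from both structures, keeping
    // the inode and blocksMap consistent.
    static void releaseFixed(INodeFile file) {
        if (file.blocks.isEmpty()) return;
        Block last = file.blocks.get(file.blocks.size() - 1);
        if (last.numBytes == 0) {
            file.blocks.remove(file.blocks.size() - 1);
            blocksMap.remove(last.id);
        }
    }

    public static void main(String[] args) {
        INodeFile f = new INodeFile();
        Block b = new Block(1L, 0L);
        f.blocks.add(b);
        blocksMap.put(b.id, b);

        releaseBuggy(f);
        // The block is gone from the file but still referenced globally.
        System.out.println("after buggy release: " + blocksMap.size());

        f.blocks.add(b);
        releaseFixed(f);
        System.out.println("after fixed release: " + blocksMap.size());
    }
}
```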

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
