[
https://issues.apache.org/jira/browse/HADOOP-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12596848#action_12596848
]
Raghu Angadi commented on HADOOP-3382:
--------------------------------------
> Should we also remove all datanodes from BlockInfo?
Possibly. Please see my comment in the code and in the jira above. There is no
explicit and/or consistent policy regarding what needs to be cleaned up. In
pretty much all these cases the datanodes would not have completed the block. I
will add the part that cleans up the datanode references.
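
As a rough illustration of that cleanup, here is a minimal, self-contained sketch. The class and method names ({{BlockCleanupSketch}}, {{removeBlockCompletely}}, etc.) are illustrative stand-ins, not the actual NameNode classes; the point is only that removing a block entry should also drop it from every datanode that was recorded as holding a replica.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model only; not the Hadoop 0.17 NameNode code.
public class BlockCleanupSketch {
    static class Datanode {
        final String name;
        final Set<Long> blockIds = new HashSet<Long>();
        Datanode(String name) { this.name = name; }
    }

    static class BlockRecord {
        final long id;
        final Set<Datanode> locations = new HashSet<Datanode>();
        BlockRecord(long id) { this.id = id; }
    }

    // blockId -> record, analogous in spirit to the NameNode's blocksMap.
    final Map<Long, BlockRecord> blocksMap = new HashMap<Long, BlockRecord>();

    // Remove the block entry and also drop it from every datanode that
    // was recorded as holding (or expected to hold) a replica.
    void removeBlockCompletely(long blockId) {
        BlockRecord record = blocksMap.remove(blockId);
        if (record == null) {
            return; // nothing to clean up
        }
        for (Datanode dn : record.locations) {
            dn.blockIds.remove(blockId);
        }
        record.locations.clear();
    }

    public static void main(String[] args) {
        BlockCleanupSketch sketch = new BlockCleanupSketch();
        Datanode dn = new Datanode("dn1");
        BlockRecord rec = new BlockRecord(42L);
        rec.locations.add(dn);
        dn.blockIds.add(rec.id);
        sketch.blocksMap.put(rec.id, rec);

        sketch.removeBlockCompletely(42L);
        System.out.println("blocksMap size = " + sketch.blocksMap.size()); // 0
        System.out.println("datanode blocks = " + dn.blockIds.size());    // 0
    }
}
{code}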
> Memory leak when files are not cleanly closed
> ---------------------------------------------
>
> Key: HADOOP-3382
> URL: https://issues.apache.org/jira/browse/HADOOP-3382
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Priority: Blocker
> Fix For: 0.17.0
>
> Attachments: HADOOP-3382.patch, HADOOP-3382.patch, memleak.txt
>
>
> {{FSNamesystem.internalReleaseCreate()}} is invoked on files that are open
> for writing but not cleanly closed, e.g. when the client invokes
> {{abandonFileInProgress()}} or when the lease expires. It deletes the last
> block if it has a length of zero. The block is deleted from the file INode
> but not from {{blocksMap}}, which leaves a reference to the file until the
> NameNode is restarted. When this happens, HADOOP-3381 multiplies the amount
> of memory leaked.
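
For readers unfamiliar with the structures involved, below is a minimal, self-contained sketch of the leak described above. {{SimpleBlock}}, {{SimpleINodeFile}} and the static {{blocksMap}} here are simplified stand-ins for the real NameNode structures; the contrast between the two release variants is the whole point.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified model of the leak: the zero-length last block is removed from
// the file but the global map keeps a reference to it (and to the file).
public class ZeroLengthBlockLeakSketch {
    static class SimpleBlock {
        final long id;
        final long numBytes;
        SimpleBlock(long id, long numBytes) { this.id = id; this.numBytes = numBytes; }
    }

    static class SimpleINodeFile {
        final List<SimpleBlock> blocks = new ArrayList<SimpleBlock>();
    }

    // Global block map, analogous in spirit to the NameNode's blocksMap.
    static final Map<Long, SimpleBlock> blocksMap = new HashMap<Long, SimpleBlock>();

    // Leaky variant: removes the zero-length last block from the file only.
    static void releaseLeaky(SimpleINodeFile file) {
        if (file.blocks.isEmpty()) return;
        SimpleBlock last = file.blocks.get(file.blocks.size() - 1);
        if (last.numBytes == 0) {
            file.blocks.remove(file.blocks.size() - 1);
            // Missing: blocksMap.remove(last.id); -> entry lingers until restart.
        }
    }

    // Fixed variant: removes the block from both the file and the global map.
    static void releaseFixed(SimpleINodeFile file) {
        if (file.blocks.isEmpty()) return;
        SimpleBlock last = file.blocks.get(file.blocks.size() - 1);
        if (last.numBytes == 0) {
            file.blocks.remove(file.blocks.size() - 1);
            blocksMap.remove(last.id);
        }
    }

    public static void main(String[] args) {
        SimpleINodeFile file = new SimpleINodeFile();
        SimpleBlock b = new SimpleBlock(1L, 0L);
        file.blocks.add(b);
        blocksMap.put(b.id, b);

        releaseLeaky(file);
        System.out.println("after leaky release: blocksMap size = " + blocksMap.size()); // 1 (leaked)

        file.blocks.add(b);
        blocksMap.put(b.id, b);
        releaseFixed(file);
        System.out.println("after fixed release: blocksMap size = " + blocksMap.size()); // 0
    }
}
{code}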