[
https://issues.apache.org/jira/browse/HDFS-140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uma Maheswara Rao G resolved HDFS-140.
--------------------------------------
Resolution: Won't Fix
As this is an improvement and not a serious issue for the 1.x versions, I am marking
it as Won't Fix.
Also confirmed that this issue is not present in trunk.
> When a file is deleted, its blocks remain in the blocksmap till the next
> block report from Datanode
> ---------------------------------------------------------------------------------------------------
>
> Key: HDFS-140
> URL: https://issues.apache.org/jira/browse/HDFS-140
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.20.1
> Reporter: dhruba borthakur
> Assignee: Uma Maheswara Rao G
> Attachments: HDFS-140.20security205.patch
>
>
> When a file is deleted, the namenode sends block deletion messages to the
> appropriate datanodes. However, the namenode does not delete these blocks
> from the blocksmap. Instead, the processing of the next block report from the
> datanode causes these blocks to be removed from the blocksmap.
> If we want to make block report processing less frequent, this issue needs
> to be addressed. It also introduces non-deterministic behaviour in a few
> unit tests. Another factor to consider is to ensure that duplicate block
> detection is not compromised.
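
For illustration only, here is a minimal Java sketch of the behaviour described above. The class and method names are hypothetical simplifications, not the actual FSNamesystem/BlocksMap code: it contrasts the lazy removal reported in the issue (entries linger until the next block report) with the eager removal the report asks for.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch, not HDFS source: shows the difference between
 * only queuing block deletions (entries stay in the map until the next
 * block report) and also dropping the blocks from the in-memory map
 * at file-deletion time.
 */
public class BlockMapSketch {

    /** Maps a block ID to the set of datanodes believed to hold it. */
    private final Map<Long, Set<String>> blocksMap = new HashMap<>();

    /** Blocks scheduled for deletion on datanodes (invalidation queue). */
    private final Set<Long> pendingInvalidation = new HashSet<>();

    /** Behaviour described in the issue: only queue the deletions. */
    public void deleteFileLazily(long[] blockIds) {
        for (long id : blockIds) {
            pendingInvalidation.add(id);  // ask datanodes to delete the block
            // blocksMap still holds the entry until the next block report
        }
    }

    /** Desired behaviour: also remove the blocks from the map immediately. */
    public void deleteFileEagerly(long[] blockIds) {
        for (long id : blockIds) {
            pendingInvalidation.add(id);  // ask datanodes to delete the block
            blocksMap.remove(id);         // drop the in-memory entry now
        }
    }
}
{code}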
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira