[ https://issues.apache.org/jira/browse/HDFS-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125971#comment-13125971 ]
Todd Lipcon commented on HDFS-140:
----------------------------------

Thanks for digging to see that this is fixed in trunk. So my question remains: though this is demonstrably a problem in the 0.20 series, is it causing any production issues? Since 0.20.x is a maintenance release series, I think we need some good justification that it's causing a production issue somewhere. Do others agree, or am I being too paranoid? Dhruba/Nicholas?

> When a file is deleted, its blocks remain in the blocksmap till the next block report from Datanode
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-140
>                 URL: https://issues.apache.org/jira/browse/HDFS-140
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: dhruba borthakur
>            Assignee: Uma Maheswara Rao G
>             Fix For: 0.20.205.0
>
>         Attachments: HDFS-140.20security205.patch
>
>
> When a file is deleted, the namenode sends out block deletion messages to the appropriate datanodes. However, the namenode does not delete these blocks from the blocksmap. Instead, the blocks are removed from the blocksmap only when the next block report from the datanode is processed.
> If we want to make block report processing less frequent, this issue needs to be addressed. The current behavior also introduces nondeterminism into a few unit tests. Another factor to consider is ensuring that duplicate block detection is not compromised.
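For illustration, here is a minimal sketch of the eager-removal behavior the comment says trunk already has: when a file is deleted, the namenode both queues the replica invalidations for the datanodes and drops the blocks from the blocksmap in the same step, rather than waiting for the next block report. All names here (NamenodeSketch, blocksMap, invalidateSets, deleteFileBlocks) are hypothetical stand-ins, not the real Hadoop internals or the attached patch.

{code:java}
// Illustrative sketch only -- not the actual HDFS-140 patch.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class NamenodeSketch {
    /** blockId -> datanodes holding a replica; stands in for the blocksmap. */
    private final Map<Long, List<String>> blocksMap = new HashMap<>();
    /** Per-datanode queues of block ids scheduled for deletion (invalidation). */
    private final Map<String, List<Long>> invalidateSets = new HashMap<>();

    /** Handle deletion of a file whose data lives in the given blocks. */
    void deleteFileBlocks(List<Long> fileBlocks) {
        for (long blockId : fileBlocks) {
            List<String> replicas = blocksMap.get(blockId);
            if (replicas == null) {
                continue; // block already gone
            }
            // Schedule the replica deletions on every datanode holding the block.
            for (String datanode : replicas) {
                invalidateSets
                        .computeIfAbsent(datanode, d -> new ArrayList<>())
                        .add(blockId);
            }
            // Remove the block from the blocksmap right away. Under the bug
            // described in this issue, this removal only happened when the
            // next block report from the datanode was processed.
            blocksMap.remove(blockId);
        }
    }
}
{code}

The point of the eager removal is that the blocksmap then reflects the namespace immediately after a delete, so correctness no longer depends on block report frequency; the trade-off discussed above is whether backporting that change to a maintenance line like 0.20.x is worth the risk absent a demonstrated production problem.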