[ https://issues.apache.org/jira/browse/HADOOP-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714225#action_12714225 ]

Suresh Srinivas commented on HADOOP-5724:
-----------------------------------------

I propose the following:
* When a file is deleted, remove the blocks that belong to the file from 
{{BlockManager.blocksMap}} and add them to {{BlockManager.recentInvalidateSets}}. 
Any subsequent attempt at adding a block (say, triggered by a block report) will 
fail, since the corresponding file no longer exists. Lingering blocks on 
datanodes will not cause issues either, since two blocks with the same ID 
generated at different times will differ in their generation stamps. I will 
create a new jira for making this change (a minimal sketch follows below this 
list).
* Close this jira since the issue is no longer valid
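
To make the proposal concrete, here is a minimal, self-contained sketch of the 
delete path, using simplified stand-ins for {{BlockManager.blocksMap}} and 
{{BlockManager.recentInvalidateSets}} (class and method names below are 
illustrative, not the actual HDFS code):

{code:java}
import java.util.*;

/** Simplified stand-in for the namenode's block bookkeeping; not the real BlockManager. */
class SimpleBlockManager {

  static class Block {
    final long blockId;
    final long generationStamp;
    Block(long blockId, long generationStamp) {
      this.blockId = blockId;
      this.generationStamp = generationStamp;
    }
  }

  /** blocksMap stand-in: block id -> block metadata. */
  private final Map<Long, Block> blocksMap = new HashMap<>();
  /** block id -> datanodes currently holding a replica. */
  private final Map<Long, Set<String>> blockLocations = new HashMap<>();
  /** recentInvalidateSets stand-in: datanode -> replicas it should delete. */
  private final Map<String, Set<Block>> recentInvalidateSets = new HashMap<>();

  /** Proposed behavior: on file delete, forget the blocks immediately and
   *  queue every known replica for invalidation on its datanode. */
  void removeBlocksForDeletedFile(List<Block> fileBlocks) {
    for (Block b : fileBlocks) {
      blocksMap.remove(b.blockId);
      Set<String> holders = blockLocations.remove(b.blockId);
      if (holders == null) {
        continue;
      }
      for (String datanode : holders) {
        recentInvalidateSets
            .computeIfAbsent(datanode, d -> new HashSet<>())
            .add(b);
      }
    }
  }

  /** A block reported later for a deleted file finds no entry in blocksMap,
   *  so it is not re-added; the stale replica is queued for invalidation. */
  boolean addStoredBlock(Block reported, String datanode) {
    if (!blocksMap.containsKey(reported.blockId)) {
      recentInvalidateSets
          .computeIfAbsent(datanode, d -> new HashSet<>())
          .add(reported);
      return false;
    }
    blockLocations
        .computeIfAbsent(reported.blockId, id -> new HashSet<>())
        .add(datanode);
    return true;
  }
}
{code}

A lingering replica on a datanode is also harmless for a later block that 
happens to reuse the same block id, because the two blocks carry different 
generation stamps and therefore never match.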

> Datanode should report deletion of blocks to Namenode explicitly
> ----------------------------------------------------------------
>
>                 Key: HADOOP-5724
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5724
>             Project: Hadoop Core
>          Issue Type: Bug
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>             Fix For: 0.21.0
>
>         Attachments: blockdel.patch, blockdel.patch
>
>
> Currently the datanode notifies the namenode of newly added blocks and of 
> blocks that are corrupt. There is no explicit message from the datanode to 
> the namenode to indicate the deletion of blocks. Block reports from the 
> datanode are the only way for the namenode to learn about the deletion of 
> blocks at a datanode. With the addition of an explicit request indicating 
> block deletion, the block report interval (which is currently 1 hour) could 
> be increased to a longer duration. This would reduce load on both the 
> namenode and the datanodes.
>  
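
For context on the description above, a hedged sketch of the originally 
requested explicit deletion report could look roughly like the interface below. 
The method and type names are hypothetical, not the actual DatanodeProtocol; 
with the proposal in the comment above, such a message becomes unnecessary.

{code:java}
/** Hypothetical datanode-to-namenode notification for deleted replicas.
 *  Illustrative only; not part of the actual DatanodeProtocol. */
interface DeletionReportProtocol {
  /**
   * The datanode reports which block ids (with their generation stamps) it has
   * physically deleted, so the namenode can prune its replica lists without
   * waiting for the next full block report (currently sent about once an hour).
   */
  void blocksDeleted(String datanodeId, long[] deletedBlockIds, long[] generationStamps);
}
{code}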
