Block allocation method does not check pendingCreates for duplicate block ids
-----------------------------------------------------------------------------

                 Key: HADOOP-1444
                 URL: https://issues.apache.org/jira/browse/HADOOP-1444
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
            Reporter: dhruba borthakur


The HDFS namenode allocates a new random blockid when requested. It then checks 
the blocksMap to verify whether this blockid is already in use. If the blockid is 
already in use, it generates another random number and repeats the process. When 
it finds a blockid that does not exist in the blocksMap, it stores that blockid 
in pendingCreateBlocks and returns it to the requesting client.

The above check for detecting duplicate blockids should consult 
pendingCreateBlocks as well, since a blockid that has been handed out to a 
client is not yet present in the blocksMap.
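
Here is a minimal sketch of the proposed combined check. The names blocksMap and 
pendingCreateBlocks come from the description above; the class, method, and field 
types are hypothetical simplifications (plain Java sets rather than the real 
namenode structures), not the actual namenode code.

    import java.util.HashSet;
    import java.util.Random;
    import java.util.Set;

    // Hypothetical, simplified stand-in for the namenode's allocation path.
    public class BlockIdAllocatorSketch {
      // blocks currently known to the namenode (stand-in for blocksMap)
      private final Set<Long> blocksMap = new HashSet<Long>();
      // blocks handed out to clients but not yet reported back
      private final Set<Long> pendingCreateBlocks = new HashSet<Long>();
      private final Random rand = new Random();

      // Keep generating random blockids until one is unused in BOTH
      // structures, then record it as pending and return it to the client.
      public synchronized long allocateBlockId() {
        long id;
        do {
          id = rand.nextLong();
        } while (blocksMap.contains(id) || pendingCreateBlocks.contains(id));
        pendingCreateBlocks.add(id);
        return id;
      }
    }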

A related problem exists when a file is deleted. Deleting a file causes all its 
blocks to be removed from the blocksMap immediately. These blockids move to 
recentInvalidateSets and are sent out to the corresponding datanodes as part of 
responses to subsequent heartbeats. So there is a time window during which a 
block exists on a datanode but not in the blocksMap. During this window, if the 
blockid random-number generator produces a blockid that exists on a datanode 
but not in the blocksMap, the namenode will fail to detect that it is a 
duplicate blockid.
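
The following sketch illustrates that window. The names blocksMap and 
recentInvalidateSets are taken from the paragraph above; the class and methods 
are simplified stand-ins for illustration only, not the real namenode code.

    import java.util.*;

    // Hypothetical, simplified stand-in for the deletion path.
    public class DeleteWindowSketch {
      private final Set<Long> blocksMap = new HashSet<Long>();
      // datanode id -> blockids waiting to be invalidated on that node
      private final Map<String, Set<Long>> recentInvalidateSets =
          new HashMap<String, Set<Long>>();

      // Deleting a file removes its blocks from blocksMap right away ...
      public synchronized void deleteFileBlocks(String datanode,
                                                List<Long> blockIds) {
        for (long id : blockIds) {
          blocksMap.remove(id);
          recentInvalidateSets
              .computeIfAbsent(datanode, d -> new HashSet<Long>())
              .add(id);
        }
      }

      // ... but the datanode only learns of it in a later heartbeat
      // response. Until then the block still exists on the datanode's disk,
      // yet a lookup against blocksMap alone reports the id as free.
      public synchronized Set<Long> heartbeatResponse(String datanode) {
        Set<Long> toInvalidate = recentInvalidateSets.remove(datanode);
        return toInvalidate == null ? Collections.<Long>emptySet()
                                    : toInvalidate;
      }
    }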



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.