An invalidated block should be removed from the blockMap
--------------------------------------------------------

                 Key: HADOOP-4540
                 URL: https://issues.apache.org/jira/browse/HADOOP-4540
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.18.0
            Reporter: Hairong Kuang


Currently, when the namenode schedules an over-replicated block replica for 
deletion, the replica does not get removed from the block map immediately. 
Instead it gets removed only when the next block report comes in. This causes 
three problems (a simplified sketch of the desired behavior follows the list): 
1. getBlockLocations may return locations that no longer contain the block;
2. Over-replication due to unsuccessful deletion cannot be detected, as 
described in HADOOP-4477;
3. The number of blocks shown on the dfs Web UI does not get updated on a 
source node when a large number of blocks have been moved from the source node 
to a target node, for example, when running the balancer.
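
A minimal sketch of the requested behavior, in plain Java. The class and 
method names below (SimpleBlockMap, scheduleInvalidate, getLocations) are 
invented for illustration and are not the actual FSNamesystem/BlocksMap API; 
the point is only that queuing a replica for invalidation should also drop its 
location from the block map at the same time, so getBlockLocations and the 
per-node block counts stay consistent without waiting for the next block 
report.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical, simplified model -- NOT the actual FSNamesystem/BlocksMap code.
public class SimpleBlockMap {
  // block id -> datanodes currently believed to hold a replica
  private final Map<Long, Set<String>> blockToNodes =
      new HashMap<Long, Set<String>>();
  // datanode -> block ids queued for deletion on that node (the invalidate set)
  private final Map<String, Set<Long>> invalidateSets =
      new HashMap<String, Set<Long>>();

  public void addReplica(long blockId, String node) {
    Set<String> nodes = blockToNodes.get(blockId);
    if (nodes == null) {
      nodes = new HashSet<String>();
      blockToNodes.put(blockId, nodes);
    }
    nodes.add(node);
  }

  // What getBlockLocations would hand back to a client.
  public Set<String> getLocations(long blockId) {
    Set<String> nodes = blockToNodes.get(blockId);
    return nodes == null ? Collections.<String>emptySet() : nodes;
  }

  // Schedule deletion of an over-replicated replica. The point of this issue:
  // the replica's location is dropped from the block map here, immediately,
  // instead of waiting for the datanode's next block report.
  public void scheduleInvalidate(long blockId, String node) {
    Set<Long> toDelete = invalidateSets.get(node);
    if (toDelete == null) {
      toDelete = new HashSet<Long>();
      invalidateSets.put(node, toDelete);
    }
    toDelete.add(blockId);

    Set<String> nodes = blockToNodes.get(blockId);
    if (nodes != null) {
      nodes.remove(node);  // keeps getBlockLocations and per-node counts consistent
    }
  }
}
{code}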

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
