[
https://issues.apache.org/jira/browse/HADOOP-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644165#action_12644165
]
Hairong Kuang commented on HADOOP-4540:
---------------------------------------
My proposal is to remove a replica from the blocks map as soon as it is
marked "invalid" (i.e., when it is moved to the recentInvalidateSet) as a
result of over-replication. In addition, when a block report comes in and
reports a replica that is marked as invalid, that replica does not get
re-added to the blocks map.
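A minimal sketch of this bookkeeping. The class, method names, and map-based
structures below are illustrative stand-ins, not the NameNode's actual
BlocksMap/FSNamesystem code:

{code:java}
import java.util.*;

// Simplified model of the proposal; blocksMap and recentInvalidateSets
// here are plain maps standing in for the real NameNode structures.
class InvalidateSketch {
    // block id -> datanodes believed to hold a replica
    private final Map<Long, Set<String>> blocksMap = new HashMap<>();
    // datanode -> block ids queued for deletion (the recentInvalidateSet)
    private final Map<String, Set<Long>> recentInvalidateSets = new HashMap<>();

    // Schedule deletion of an over-replicated replica and remove it from
    // the blocks map in the same step, per the proposal.
    void invalidateReplica(long blockId, String datanode) {
        recentInvalidateSets
            .computeIfAbsent(datanode, d -> new HashSet<>())
            .add(blockId);
        Set<String> holders = blocksMap.get(blockId);
        if (holders != null) {
            holders.remove(datanode); // immediate, not deferred to the next report
        }
    }

    // Handle one (block, datanode) pair from a block report: a replica
    // still pending invalidation is not re-added to the blocks map.
    void reportReplica(long blockId, String datanode) {
        Set<Long> pending = recentInvalidateSets.get(datanode);
        if (pending != null && pending.contains(blockId)) {
            return; // marked invalid: ignore
        }
        blocksMap.computeIfAbsent(blockId, b -> new HashSet<>())
                 .add(datanode);
    }
}
{code}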
> An invalidated block should be removed from the blockMap
> --------------------------------------------------------
>
> Key: HADOOP-4540
> URL: https://issues.apache.org/jira/browse/HADOOP-4540
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.17.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Priority: Blocker
> Fix For: 0.18.2
>
>
> Currently, when the namenode schedules the deletion of an over-replicated
> block, the replica to be deleted does not get removed from the blocks map
> immediately. Instead, it gets removed when the next block report comes in.
> This causes three problems:
> 1. getBlockLocations may return locations that no longer hold the block
> (see the sketch after this list);
> 2. Over-replication caused by an unsuccessful deletion cannot be detected,
> as described in HADOOP-4477.
> 3. The number of blocks shown on dfs Web UI does not get updated on a source
> node when a large number of blocks have been moved from the source node to a
> target node, for example, when running a balancer.
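> A hypothetical timeline for problem 1 (names like dn1..dn4 and block 42 are
> illustrative only): between the scheduled delete and dn4's next block
> report, the blocks map still lists dn4, so a location lookup returns a node
> that may already have dropped the replica.
> {code:java}
> import java.util.*;
>
> class StaleLocationWindow {
>     public static void main(String[] args) {
>         Map<Long, Set<String>> blocksMap = new HashMap<>();
>         blocksMap.put(42L, new TreeSet<>(Arrays.asList("dn1", "dn2", "dn3", "dn4")));
>
>         // NameNode queues dn4's replica of block 42 for deletion, but
>         // (current behavior) leaves the blocks map untouched.
>         Set<Long> toInvalidateOnDn4 = new HashSet<>(Collections.singleton(42L));
>
>         // A getBlockLocations-style lookup in this window still sees dn4.
>         System.out.println("locations for block 42: " + blocksMap.get(42L));
>         // prints: locations for block 42: [dn1, dn2, dn3, dn4]
>     }
> }
> {code}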