[
https://issues.apache.org/jira/browse/HDFS-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067124#comment-15067124
]
Rushabh S Shah commented on HDFS-9586:
--------------------------------------
FSNameSystem#listCorruptFileBlocks gets the list of corrupt blocks from the
UnderReplicatedBlocks.QUEUE_WITH_CORRUPT_BLOCKS queue.
According to the code below, a block is added to the QUEUE_WITH_CORRUPT_BLOCKS
queue only if there are zero decommissionedReplicas (the name is a little
confusing, since this count is the sum of decommissioning and decommissioned
replicas).
{noformat}
if (curReplicas == 0) {
  // If there are zero non-decommissioned replicas but there are
  // some decommissioned replicas, then assign them highest priority
  if (decommissionedReplicas > 0) {
    return QUEUE_HIGHEST_PRIORITY;
  }
  if (readOnlyReplicas > 0) {
    // only has read-only replicas, highest risk
    // since the read-only replicas may go down all together.
    return QUEUE_HIGHEST_PRIORITY;
  }
  //all we have are corrupt blocks
  return QUEUE_WITH_CORRUPT_BLOCKS;
}
{noformat}
So every block that goes into QUEUE_WITH_CORRUPT_BLOCKS already has zero
decommissioned (and decommissioning) replicas.
Please correct me if my understanding is wrong.
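To make the argument concrete, here is a minimal, self-contained sketch (not the actual HDFS class) of the priority decision quoted above. The class name, the simplified getPriority signature, and the constant values are assumptions for illustration; only the branch structure mirrors the quoted snippet.

```java
public class PriorityDemo {
    // Hypothetical constants, chosen only so the queues are distinguishable.
    static final int QUEUE_HIGHEST_PRIORITY = 0;
    static final int QUEUE_UNDER_REPLICATED = 2;
    static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

    // Simplified version of the quoted logic: a block reaches
    // QUEUE_WITH_CORRUPT_BLOCKS only when live (curReplicas),
    // decommissioned, and read-only replica counts are all zero.
    static int getPriority(int curReplicas, int decommissionedReplicas,
                           int readOnlyReplicas) {
        if (curReplicas == 0) {
            if (decommissionedReplicas > 0) {
                return QUEUE_HIGHEST_PRIORITY;
            }
            if (readOnlyReplicas > 0) {
                return QUEUE_HIGHEST_PRIORITY;
            }
            return QUEUE_WITH_CORRUPT_BLOCKS;
        }
        // Placeholder for the remaining (replicated) cases.
        return QUEUE_UNDER_REPLICATED;
    }

    public static void main(String[] args) {
        // A block whose only replicas sit on decommissioning/decommissioned
        // nodes is assigned highest priority, not the corrupt queue:
        System.out.println(getPriority(0, 2, 0));
        // Only with zero replicas of every kind does it land in
        // QUEUE_WITH_CORRUPT_BLOCKS:
        System.out.println(getPriority(0, 0, 0));
    }
}
```

Under this reading, any block drawn from QUEUE_WITH_CORRUPT_BLOCKS necessarily had zero decommissioned/decommissioning replicas at the time it was queued.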
> listCorruptFileBlocks should not output files that all replications are
> decommissioning
> ---------------------------------------------------------------------------------------
>
> Key: HDFS-9586
> URL: https://issues.apache.org/jira/browse/HDFS-9586
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Phil Yang
> Assignee: Phil Yang
> Attachments: 9586-v1.patch
>
>
> As HDFS-7933 said, we should count decommissioning and decommissioned nodes
> respectively and regard decommissioning nodes as special live nodes whose
> file is not corrupt or missing.
> So in listCorruptFileBlocks which is used by fsck and HDFS namenode website,
> we should collect a corrupt file only if liveReplicas and decommissioning are
> both 0.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)