[
https://issues.apache.org/jira/browse/HDFS-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15065194#comment-15065194
]
Rui Li commented on HDFS-8461:
------------------------------
Thanks [~zhz] for your input. In
{{TestUnderReplicatedBlockQueues::doTestStripedBlockPriorities}}, we expect the
block to be in {{QUEUE_VERY_UNDER_REPLICATED}} if {{curReplicas - dataBlkNum == 1}}.
So should we change the test to comply with the 1/3 logic?
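For reference, a minimal sketch of the two candidate thresholds being discussed. The names and the striped adaptation of the 1/3 rule are hypothetical (the actual patch may differ); the numbers assume a 6+3 schema, i.e. dataBlkNum = 6 and expected = 9:

```java
// Hypothetical comparison of the two "very under replicated" thresholds
// discussed above, for a striped block with dataBlkNum data blocks and
// `expected` total internal blocks. Not the actual HDFS-8461 code.
public class ThresholdCheck {
    // Current test expectation: very under replicated when exactly one
    // redundant block remains beyond the data blocks.
    static boolean veryUnderByTest(int curReplicas, int dataBlkNum) {
        return curReplicas - dataBlkNum == 1;
    }

    // The replication-style "1/3 logic" (curReplicas * 3 < expected),
    // applied here to the remaining redundancy: fewer than a third of
    // the parity blocks are still present.
    static boolean veryUnderByThird(int curReplicas, int dataBlkNum, int expected) {
        return (curReplicas - dataBlkNum) * 3 < expected - dataBlkNum;
    }

    public static void main(String[] args) {
        // 6+3 schema: the two rules disagree at curReplicas = 7.
        System.out.println(veryUnderByTest(7, 6));     // true
        System.out.println(veryUnderByThird(7, 6, 9)); // false: 1 * 3 < 3 fails
    }
}
```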
> Erasure coding: fix priority level of UnderReplicatedBlocks for striped block
> -----------------------------------------------------------------------------
>
> Key: HDFS-8461
> URL: https://issues.apache.org/jira/browse/HDFS-8461
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Walter Su
> Assignee: Walter Su
> Fix For: HDFS-7285
>
> Attachments: HDFS-8461-HDFS-7285.001.patch,
> HDFS-8461-HDFS-7285.002.patch
>
>
> Issue 1: correctly mark corrupted blocks.
> Issue 2: distinguish the highest-risk priority from normal-risk priority.
> {code:title=UnderReplicatedBlocks.java}
> private int getPriority(int curReplicas,
> ...
> } else if (curReplicas == 1) {
>   // only one replica - risk of loss
>   // highest priority
>   return QUEUE_HIGHEST_PRIORITY;
> ...
> {code}
> For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas
> == 6 (assuming a 6+3 schema).
> That's important. Because
> {code:title=BlockManager.java}
> DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
> ...
>   if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
>       && !node.isDecommissionInProgress()
>       && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams) {
>     continue; // already reached replication limit
>   }
> ...
> {code}
> It may not return enough source DNs (maybe only 5), and recovery fails.
> A busy node should not be skipped if a block is at the highest risk/priority.
> The issue is that a striped block never gets this highest priority.
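The priority argument in the description above can be sketched as follows. This is an illustrative sketch under the stated 6+3 assumption, not the HDFS-8461 patch; the method name, queue constants, and the mapping of the 1/3 rule onto parity blocks are all assumptions:

```java
// Illustrative sketch: priority selection for a striped block, treating
// curReplicas == dataBlkNum as the highest-risk case, analogous to
// curReplicas == 1 for a replicated block. Not the actual HDFS code.
public class StripedPriority {
    static final int QUEUE_HIGHEST_PRIORITY = 0;
    static final int QUEUE_VERY_UNDER_REPLICATED = 1;
    static final int QUEUE_UNDER_REPLICATED = 2;
    static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

    // For a 6+3 schema: dataBlkNum = 6, expected = 9. Losing one more
    // internal block at curReplicas == dataBlkNum makes the block
    // unrecoverable, so that case gets the highest priority.
    static int getStripedPriority(int curReplicas, int dataBlkNum, int expected) {
        if (curReplicas < dataBlkNum) {
            return QUEUE_WITH_CORRUPT_BLOCKS;   // too few internal blocks to decode
        } else if (curReplicas == dataBlkNum) {
            return QUEUE_HIGHEST_PRIORITY;      // one more loss is fatal
        } else if ((curReplicas - dataBlkNum) * 3 < expected - dataBlkNum) {
            return QUEUE_VERY_UNDER_REPLICATED; // under a third of parity left
        } else {
            return QUEUE_UNDER_REPLICATED;
        }
    }

    public static void main(String[] args) {
        System.out.println(getStripedPriority(6, 6, 9)); // 0: highest priority
        System.out.println(getStripedPriority(5, 6, 9)); // 4: unrecoverable
    }
}
```

With a priority of QUEUE_HIGHEST_PRIORITY, the chooseSourceDatanodes check quoted above would no longer skip busy nodes for such a block.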
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)