[ https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14954147#comment-14954147 ]
Jing Zhao commented on HDFS-9205:
---------------------------------
# Nit: the javadoc of {{UnderReplicatedBlocks}} needs to be fixed:
"getPriority(BlockInfo, int, int, int)" should be updated to
"getPriority(BlockInfo, int, int, int, int)".
# Minor: since the iterator of the {{LightWeightLinkedSet}} already correctly
throws NoSuchElementException when there is no next element, it may not be
necessary to do the hasNext check (see the simplified sketch after the snippet).
{code}
public BlockInfo next() {
  if (!hasNext()) {
    throw new NoSuchElementException();
  }
  return b.next();
}
{code}
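A minimal sketch of the simplified method, assuming the wrapped iterator {{b}} comes
from the {{LightWeightLinkedSet}} and already throws the exception itself:
{code}
// Sketch only: delegate directly, relying on the LightWeightLinkedSet
// iterator to throw NoSuchElementException once it is exhausted.
public BlockInfo next() {
  return b.next();
}
{code}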
+1 after addressing these.
> Do not schedule corrupt blocks for replication
> ----------------------------------------------
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Tsz Wo Nicholas Sze
> Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch,
> h9205_20151008.patch, h9205_20151009.patch, h9205_20151009b.patch
>
>
> Corrupt blocks are, by definition, blocks that cannot be read. As a consequence,
> they cannot be replicated. In UnderReplicatedBlocks, there is a queue for
> QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may choose blocks
> from it. It seems that scheduling corrupt blocks for replication wastes
> resources and potentially slows down replication of the higher priority blocks.
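> A minimal, self-contained sketch of the idea described above (a sketch only, not
> the attached patch): the queue layout, constant values, and block-id type below
> are illustrative; only the names QUEUE_WITH_CORRUPT_BLOCKS and
> chooseUnderReplicatedBlocks come from UnderReplicatedBlocks.
> {code}
> import java.util.ArrayList;
> import java.util.Iterator;
> import java.util.LinkedHashSet;
> import java.util.List;
> import java.util.Set;
>
> /** Illustration: choose blocks for replication while skipping the corrupt queue. */
> public class ReplicationQueueSketch {
>   // Priority levels; the corrupt queue is the lowest priority (values are illustrative).
>   static final int QUEUE_HIGHEST_PRIORITY = 0;
>   static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;
>   static final int LEVEL = 5;
>
>   private final List<Set<Long>> priorityQueues = new ArrayList<>();
>
>   ReplicationQueueSketch() {
>     for (int i = 0; i < LEVEL; i++) {
>       priorityQueues.add(new LinkedHashSet<>());
>     }
>   }
>
>   void add(long blockId, int priority) {
>     priorityQueues.get(priority).add(blockId);
>   }
>
>   /**
>    * Walk the priority queues in order, stopping before QUEUE_WITH_CORRUPT_BLOCKS:
>    * corrupt blocks cannot be read, so scheduling them for replication is wasted work.
>    */
>   List<Long> chooseUnderReplicatedBlocks(int blocksToProcess) {
>     List<Long> chosen = new ArrayList<>();
>     for (int priority = 0; priority < QUEUE_WITH_CORRUPT_BLOCKS; priority++) {
>       Iterator<Long> it = priorityQueues.get(priority).iterator();
>       while (it.hasNext() && chosen.size() < blocksToProcess) {
>         chosen.add(it.next());
>       }
>       if (chosen.size() >= blocksToProcess) {
>         break;
>       }
>     }
>     return chosen;
>   }
>
>   public static void main(String[] args) {
>     ReplicationQueueSketch q = new ReplicationQueueSketch();
>     q.add(1L, QUEUE_HIGHEST_PRIORITY);
>     q.add(2L, QUEUE_WITH_CORRUPT_BLOCKS); // corrupt: never scheduled
>     System.out.println(q.chooseUnderReplicatedBlocks(10)); // prints [1]
>   }
> }
> {code}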