[
https://issues.apache.org/jira/browse/HDFS-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570354#comment-16570354
]
Kitti Nanasi commented on HDFS-13658:
-------------------------------------
Thanks for the comments [~xiaochen]! I fixed them in the latest patch.
About TestLowRedundancyBlockQueues#doTestStripedBlockPriorities: the first
block in the loop goes into the highest-priority queue because
LowRedundancyBlocks#add is invoked with curReplicas=1, while the remaining
blocks do not, because add is called with curReplicas=1+i where i is greater
than 0. Adding the corrupt blocks does not increase the metric; it stays at
one, because the blocks added in the loop are not removed. Does this answer
your question?
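To illustrate the point, here is a minimal standalone sketch of the behavior described above. It is not the real LowRedundancyBlocks implementation; the class name, the two-queue layout, the curReplicas <= 1 threshold, and the highestPriorityBlockCount field are all simplified stand-ins used only to show why exactly one block from the loop lands in the highest-priority queue:

```java
import java.util.ArrayList;
import java.util.List;

public class PriorityModel {
    static final int QUEUE_HIGHEST_PRIORITY = 0;
    static final int QUEUE_LOW_REDUNDANCY = 1;

    final List<List<Integer>> queues = new ArrayList<>();
    int highestPriorityBlockCount = 0;  // the metric under discussion

    PriorityModel() {
        queues.add(new ArrayList<>());  // highest priority
        queues.add(new ArrayList<>());  // lower priority
    }

    // Mirrors the add(block, curReplicas, ...) call pattern from the test:
    // a block enters the highest-priority queue only when curReplicas <= 1.
    void add(int blockId, int curReplicas) {
        int priority = (curReplicas <= 1)
            ? QUEUE_HIGHEST_PRIORITY : QUEUE_LOW_REDUNDANCY;
        queues.get(priority).add(blockId);
        if (priority == QUEUE_HIGHEST_PRIORITY) {
            // The metric only changes when a block enters or leaves this
            // queue; blocks already queued keep it steady.
            highestPriorityBlockCount++;
        }
    }

    public static void main(String[] args) {
        PriorityModel q = new PriorityModel();
        for (int i = 0; i < 4; i++) {
            q.add(i, 1 + i);  // only i == 0 yields curReplicas == 1
        }
        // Only the first block reached the highest-priority queue.
        System.out.println(q.highestPriorityBlockCount);  // prints 1
    }
}
```

Since the loop's blocks are never removed, later additions (such as the corrupt blocks in the test) leave the count unchanged at one.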
> fsck, dfsadmin -report, and NN WebUI should report number of blocks that have
> 1 replica
> ---------------------------------------------------------------------------------------
>
> Key: HDFS-13658
> URL: https://issues.apache.org/jira/browse/HDFS-13658
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs
> Affects Versions: 3.1.0
> Reporter: Kitti Nanasi
> Assignee: Kitti Nanasi
> Priority: Major
> Attachments: HDFS-13658.001.patch, HDFS-13658.002.patch,
> HDFS-13658.003.patch, HDFS-13658.004.patch, HDFS-13658.005.patch,
> HDFS-13658.006.patch, HDFS-13658.007.patch, HDFS-13658.008.patch,
> HDFS-13658.009.patch, HDFS-13658.010.patch
>
>
> fsck, dfsadmin -report, and the NN WebUI should report the number of blocks
> that have only 1 replica. We have had many cases opened in which a customer
> lost files/blocks after losing a disk or a DN, because those blocks had only
> 1 replica. We need to make customers better aware of this situation so that
> they can take action.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)