[ https://issues.apache.org/jira/browse/HDFS-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17891970#comment-17891970 ]

ASF GitHub Bot commented on HDFS-17641:
---------------------------------------

PrateekSane commented on code in PR #7123:
URL: https://github.com/apache/hadoop/pull/7123#discussion_r1811232586


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestLowRedundancyBlockQueues.java:
##########
@@ -177,50 +179,58 @@ public void testBlockPriorities() throws Throwable {
     BlockInfo block_very_low_redundancy = genBlockInfo(3);
     BlockInfo block_corrupt = genBlockInfo(4);
     BlockInfo block_corrupt_repl_one = genBlockInfo(5);
+    BlockInfo block_badly_distributed = genBlockInfo(6);
 
     // Add a block with a single entry
     assertAdded(queues, block1, 1, 0, 3);
     assertInLevel(queues, block1, LowRedundancyBlocks.QUEUE_HIGHEST_PRIORITY);
-    verifyBlockStats(queues, 1, 0, 0, 0, 0, 1, 0);
+    verifyBlockStats(queues, 1, 0, 0, 0, 0, 1, 0, 0);
 
     // Repeated additions fail
     assertFalse(queues.add(block1, 1, 0, 0, 3));
-    verifyBlockStats(queues, 1, 0, 0, 0, 0, 1, 0);
+    verifyBlockStats(queues, 1, 0, 0, 0, 0, 1, 0, 0);
 
     // Add a second block with two replicas
     assertAdded(queues, block2, 2, 0, 3);
     assertInLevel(queues, block2, LowRedundancyBlocks.QUEUE_LOW_REDUNDANCY);
-    verifyBlockStats(queues, 2, 0, 0, 0, 0, 1, 0);
+    verifyBlockStats(queues, 2, 0, 0, 0, 0, 1, 0, 0);
 
     // Now try to add a block that is corrupt
     assertAdded(queues, block_corrupt, 0, 0, 3);
     assertInLevel(queues, block_corrupt,
                   LowRedundancyBlocks.QUEUE_WITH_CORRUPT_BLOCKS);
-    verifyBlockStats(queues, 2, 1, 0, 0, 0, 1, 0);
+    verifyBlockStats(queues, 2, 1, 0, 0, 0, 1, 0, 0);
 
     // Insert a block with very low redundancy
     assertAdded(queues, block_very_low_redundancy, 4, 0, 25);
     assertInLevel(queues, block_very_low_redundancy,
                   LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY);
-    verifyBlockStats(queues, 3, 1, 0, 0, 0, 1, 0);
+    verifyBlockStats(queues, 3, 1, 0, 0, 0, 1, 0, 0);
 
     // Insert a corrupt block with replication factor 1
     assertAdded(queues, block_corrupt_repl_one, 0, 0, 1);
-    verifyBlockStats(queues, 3, 2, 1, 0, 0, 1, 0);
+    verifyBlockStats(queues, 3, 2, 1, 0, 0, 1, 0, 0);
 
     // Bump up the expected count for corrupt replica one block from 1 to 3
     queues.update(block_corrupt_repl_one, 0, 0, 0, 3, 0, 2);
-    verifyBlockStats(queues, 3, 2, 0, 0, 0, 1, 0);
+    verifyBlockStats(queues, 3, 2, 0, 0, 0, 1, 0, 0);
 
     // Reduce the expected replicas to 1
     queues.update(block_corrupt, 0, 0, 0, 1, 0, -2);
-    verifyBlockStats(queues, 3, 2, 1, 0, 0, 1, 0);
+    verifyBlockStats(queues, 3, 2, 1, 0, 0, 1, 0, 0);
     queues.update(block_very_low_redundancy, 0, 0, 0, 1, -4, -24);
-    verifyBlockStats(queues, 2, 3, 2, 0, 0, 1, 0);
+    verifyBlockStats(queues, 2, 3, 2, 0, 0, 1, 0, 0);
 
     // Reduce the expected replicas to 1 for block1
     queues.update(block1, 1, 0, 0, 1, 0, 0);
-    verifyBlockStats(queues, 2, 3, 2, 0, 0, 0, 0);
+    // expect 1 badly distributed block
+    verifyBlockStats(queues, 2, 3, 2, 0, 0, 0, 0, 1);
+
+    // Insert a block with too many replicas so it becomes badly distributed
+    assertAdded(queues, block_badly_distributed, 2, 0, 1);

Review Comment:
   That is true; however, I think it may still be the way to test it, since 
at the BlockQueue level it is the only block type that ends up with more 
replicas than required.
   
   From the discussion, the thinking is that the implementation assumes 
nothing calls getPriority in the first place unless the block needs 
replication, so if a block already has sufficient replicas, that must be 
because it is badly distributed.
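
To make that reasoning concrete, here is a minimal sketch of the decision being 
described. It is not the actual LowRedundancyBlocks.getPriority implementation; 
the wrapper class, enum, and the very-low-redundancy threshold below are 
illustrative only:

```java
// Illustrative sketch only; not the real LowRedundancyBlocks.getPriority.
final class PrioritySketch {
  enum Queue { HIGHEST, VERY_LOW, LOW, BADLY_DISTRIBUTED, CORRUPT }

  // Assumption (as described above): a priority is only computed for blocks
  // that already need reconstruction, so "enough replicas" can only mean the
  // copies sit on too few racks, i.e. the block is badly distributed.
  static Queue classify(int curReplicas, int expectedReplicas,
                        int outOfServiceReplicas) {
    if (curReplicas >= expectedReplicas) {
      return Queue.BADLY_DISTRIBUTED;   // enough copies, too few racks
    }
    if (curReplicas == 0) {
      // No live replicas: recoverable only via out-of-service nodes,
      // otherwise the block is effectively corrupt.
      return outOfServiceReplicas > 0 ? Queue.HIGHEST : Queue.CORRUPT;
    }
    if (curReplicas == 1) {
      return Queue.HIGHEST;             // a single live copy left
    }
    return curReplicas * 3 < expectedReplicas   // illustrative threshold
        ? Queue.VERY_LOW : Queue.LOW;
  }
}
```

Read this way, adding a block whose replica count already meets or exceeds the 
expected count is exactly the condition that routes it into the badly 
distributed queue, which is what the new assertion in the test relies on.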





> Add badly distributed blocks metric for low redundancy blocks 
> --------------------------------------------------------------
>
>                 Key: HDFS-17641
>                 URL: https://issues.apache.org/jira/browse/HDFS-17641
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Prateek Sane
>            Priority: Minor
>              Labels: pull-request-available
>
> Low Redundancy blocks have different priority levels including highest 
> priority, very low redundancy, low redundancy, badly distributed, and corrupt.
> Having a metric for the number of badly distributed blocks would be helpful.
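
For reference, the priority levels listed above appear to correspond to the 
queue constants in org.apache.hadoop.hdfs.server.blockmanagement.LowRedundancyBlocks, 
several of which the test above asserts against. A hedged mirror of that 
mapping, with integer values shown for orientation only:

```java
// Hedged mirror of the queue levels in LowRedundancyBlocks; the values are
// for orientation only and should be checked against the source. The
// proposed metric counts the blocks sitting in the badly distributed queue.
interface QueueLevelsSketch {
  int QUEUE_HIGHEST_PRIORITY = 0;            // "highest priority"
  int QUEUE_VERY_LOW_REDUNDANCY = 1;         // "very low redundancy"
  int QUEUE_LOW_REDUNDANCY = 2;              // "low redundancy"
  int QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3;  // "badly distributed"
  int QUEUE_WITH_CORRUPT_BLOCKS = 4;         // "corrupt"
}
```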



