haiyang1987 commented on code in PR #6597:
URL: https://github.com/apache/hadoop/pull/6597#discussion_r1522764907


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReconstructStripedBlocks.java:
##########
@@ -575,5 +576,80 @@ public void testReconstructionWithStorageTypeNotEnough() 
       throws Exception {
       cluster.shutdown();
     }
   }
+  @Test
+  public void testDeleteOverReplicatedStripedBlock() throws Exception {
+    final HdfsConfiguration conf = new HdfsConfiguration();
+    conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
+    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_CONSIDERLOAD_KEY,
+            false);
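+    // Use SSD for every datanode except the last one, which uses DISK.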
+    StorageType[][] st = new StorageType[groupSize + 2][1];
+    for (int i = 0; i < st.length - 1; i++) {
+      st[i] = new StorageType[]{StorageType.SSD};
+    }
+    st[st.length - 1] = new StorageType[]{StorageType.DISK};
+
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(groupSize + 2)
+            .storagesPerDatanode(1)
+            .storageTypes(st)
+            .build();
+    cluster.waitActive();
+    DistributedFileSystem fs = cluster.getFileSystem();
+    fs.enableErasureCodingPolicy(
+            StripedFileTestUtil.getDefaultECPolicy().getName());
+    try {
+      fs.mkdirs(dirPath);
+      fs.setErasureCodingPolicy(dirPath,
+              StripedFileTestUtil.getDefaultECPolicy().getName());
+      fs.setStoragePolicy(dirPath, HdfsConstants.ALLSSD_STORAGE_POLICY_NAME);
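+      // Create a file spanning two full stripes (cellSize * dataBlocks * 2).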
+      DFSTestUtil.createFile(fs, filePath,
+              cellSize * dataBlocks * 2, (short) 1, 0L);
+      // Stop a dn
+      LocatedBlocks blks =
+              fs.getClient().getLocatedBlocks(filePath.toString(), 0);
+      LocatedStripedBlock block =
+              (LocatedStripedBlock) blks.getLastLocatedBlock();
+      DatanodeInfo dnToStop = block.getLocations()[0];
+
+      MiniDFSCluster.DataNodeProperties dnProp =
+              cluster.stopDataNode(dnToStop.getXferAddr());
+      cluster.setDataNodeDead(dnToStop);
+
+      // Wait for reconstruction to happen
+      DFSTestUtil.waitForReplication(fs, filePath, groupSize, 15 * 1000);
+
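+      // Stop a second datanode and wait for reconstruction again.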
+      DatanodeInfo dnToStop2 = block.getLocations()[1];
+      cluster.stopDataNode(dnToStop2.getXferAddr());
+      cluster.setDataNodeDead(dnToStop2);
+      DFSTestUtil.waitForReplication(fs, filePath, groupSize, 15 * 1000);
+
+      // Bring the dn back: 10 internal blocks now
+      cluster.restartDataNode(dnProp);
+      cluster.waitActive();
+      DFSTestUtil.verifyClientStats(conf, cluster);
+
+      // Currently the namenode is able to track the missing block; restart the NN.

Review Comment:
   Why do we need to restart the namenode here?
   For lines 629-639:
   ```java
   for (DataNode dn : cluster.getDataNodes()) {
     DataNodeTestUtils.triggerBlockReport(dn);
   }

   BlockManager bm = cluster.getNamesystem().getBlockManager();
   GenericTestUtils.waitFor(
       () -> bm.getPendingDeletionBlocksCount() == 0,
       10, 2000);

   for (DataNode dn : cluster.getDataNodes()) {
     DataNodeTestUtils.triggerHeartbeat(dn);
   }

   for (DataNode dn : cluster.getDataNodes()) {
     DataNodeTestUtils.triggerDeletionReport(dn);
   }
   ```
   Maybe we could update this logic to avoid the `Thread.sleep(3000)`; what do you think?
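
   A minimal sketch of the sleep-free variant I have in mind (the `getExcessBlocksCount()` check is my assumption about the state the test is actually waiting for, and the 100 ms poll / 10 s timeout values are only examples):
   ```java
   // Trigger block reports so the NN learns about the extra internal block,
   // then poll the BlockManager instead of sleeping a fixed 3 seconds.
   for (DataNode dn : cluster.getDataNodes()) {
     DataNodeTestUtils.triggerBlockReport(dn);
   }
   BlockManager bm = cluster.getNamesystem().getBlockManager();
   GenericTestUtils.waitFor(
       () -> bm.getExcessBlocksCount() == 0
           && bm.getPendingDeletionBlocksCount() == 0,
       100, 10000);
   ```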


