Hexiaoqiao commented on code in PR #6911:
URL: https://github.com/apache/hadoop/pull/6911#discussion_r1661073434
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java:
##########
@@ -462,6 +462,59 @@ public void testFileChecksumAfterDecommission() throws Exception {
fileChecksum1.equals(fileChecksum2));
}
+ /**
+ * Test decommission when DN marked as busy.
+ * @throws Exception
+ */
+ @Test(timeout = 120000)
+ public void testBusyAfterDecommissionNode() throws Exception {
+ byte busyDNIndex = 0;
+ //1. create EC file
+ final Path ecFile = new Path(ecDir, "testBusyAfterDecommissionNode");
+ int writeBytes = cellSize * dataBlocks;
+ writeStripedFile(dfs, ecFile, writeBytes);
+ Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
+ FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
+
+ //2. make one DN busy
+ final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
+ .getINode4Write(ecFile.toString()).asFile();
+ BlockInfo firstBlock = fileNode.getBlocks()[0];
+ DatanodeStorageInfo[] dnStorageInfos = bm.getStorages(firstBlock);
+ DatanodeDescriptor busyNode =
+ dnStorageInfos[busyDNIndex].getDatanodeDescriptor();
+ for (int j = 0; j < replicationStreamsHardLimit; j++) {
+ busyNode.incrementPendingReplicationWithoutTargets();
+ }
+
+ //3. decommission one node
+ List<DatanodeInfo> decommisionNodes = new ArrayList<>();
+ decommisionNodes.add(busyNode);
+ decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSION_INPROGRESS);
+
+ final List<DatanodeDescriptor> live = new ArrayList<DatanodeDescriptor>();
+ bm.getDatanodeManager().fetchDatanodes(live, null, false);
+ int liveDecommissioning = 0;
+ for (DatanodeDescriptor node : live) {
+ liveDecommissioning += node.isDecommissionInProgress() ? 1 : 0;
+ }
+ assertEquals(decommisionNodes.size(), liveDecommissioning);
+
+ //4. wait for decommission block to replicate
+ Thread.sleep(3000);
Review Comment:
How about using `GenericTestUtils.waitFor` rather than `Thread.sleep`?
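For reference, `GenericTestUtils.waitFor` polls a supplied condition at a fixed interval until it becomes true or a timeout elapses, which avoids both the flakiness and the wasted time of a fixed `Thread.sleep`. A minimal self-contained sketch of that polling pattern (stand-in class and names only, not the actual Hadoop implementation):

```java
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class WaitForSketch {
  // Stand-in for GenericTestUtils.waitFor: poll the check every
  // checkEveryMillis until it returns true, or fail with a
  // TimeoutException once waitForMillis has elapsed.
  public static void waitFor(Supplier<Boolean> check, int checkEveryMillis,
      int waitForMillis) throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + waitForMillis;
    while (!check.get()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException(
            "Condition not met within " + waitForMillis + " ms");
      }
      Thread.sleep(checkEveryMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    AtomicInteger polls = new AtomicInteger();
    // The condition becomes true after a few polls, much as replication
    // progress would in the test above.
    waitFor(() -> polls.incrementAndGet() >= 3, 10, 5000);
    System.out.println("condition met after " + polls.get() + " polls");
  }
}
```

In the test, the fixed `Thread.sleep(3000)` in step 4 could be replaced by a call of this shape whose condition checks the replication state directly, so the test proceeds as soon as the state is reached and fails with a clear timeout otherwise.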
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java:
##########
@@ -462,6 +462,59 @@ public void testFileChecksumAfterDecommission() throws Exception {
fileChecksum1.equals(fileChecksum2));
}
+ /**
+ * Test decommission when DN marked as busy.
+ * @throws Exception
+ */
+ @Test(timeout = 120000)
+ public void testBusyAfterDecommissionNode() throws Exception {
+ byte busyDNIndex = 0;
Review Comment:
Any particular consideration in declaring the index as `byte` here? Not a
blocker, just out of interest.
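For context on the question: a `byte` does work as an array index in Java, since it is promoted to `int` at the point of use, but its narrow range (-128..127) and the casts required for plain arithmetic are why `int` is the conventional choice. A small illustrative sketch (hypothetical demo class, not from the PR):

```java
public class ByteIndexDemo {
  public static void main(String[] args) {
    byte i = 0;
    int[] arr = {10, 20, 30};
    // Array access promotes the byte index to int, so this is legal.
    System.out.println(arr[i]);
    // The ++ operator applies an implicit narrowing cast, so it compiles;
    // by contrast, i = i + 1 would not, because i + 1 is an int.
    i++;
    System.out.println(arr[i]);
  }
}
```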
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]