Hexiaoqiao commented on code in PR #5913:
URL: https://github.com/apache/hadoop/pull/5913#discussion_r1285409083
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyBlockManagement.java:
##########
@@ -97,4 +97,58 @@ public void testInvalidateBlock() throws Exception {
}
}
+ /**
+ * Test that the Standby/Observer NameNode should not run the redundant
+ * replica block logic when the replication factor is decreased.
+ * @throws Exception
+ */
+ @Test(timeout = 60000)
+ public void testNotHandleRedundantReplica() throws Exception {
+ Configuration conf = new Configuration();
+ HAUtil.setAllowStandbyReads(conf, true);
+ conf.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
+
+ // Create HA Cluster.
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+
.nnTopology(MiniDFSNNTopology.simpleHATopology()).numDataNodes(10).build()) {
Review Comment:
Here the number of DataNodes is 10; is that necessary? I think setting it to 4
is enough. What do you think?
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyBlockManagement.java:
##########
@@ -97,4 +97,58 @@ public void testInvalidateBlock() throws Exception {
}
}
+ /**
+ * Test that the Standby/Observer NameNode should not run the redundant
+ * replica block logic when the replication factor is decreased.
+ * @throws Exception
+ */
+ @Test(timeout = 60000)
+ public void testNotHandleRedundantReplica() throws Exception {
+ Configuration conf = new Configuration();
+ HAUtil.setAllowStandbyReads(conf, true);
+ conf.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
+
+ // Create HA Cluster.
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+     .nnTopology(MiniDFSNNTopology.simpleHATopology()).numDataNodes(10).build()) {
+ cluster.waitActive();
+ cluster.transitionToActive(0);
+
+ NameNode nn1 = cluster.getNameNode(0);
+ assertEquals("ACTIVE", nn1.getNamesystem().getState().name());
+ NameNode nn2 = cluster.getNameNode(1);
+ assertEquals("STANDBY", nn2.getNamesystem().getState().name());
+
+ cluster.triggerHeartbeats();
+ // Sending the FBR.
+ cluster.triggerBlockReports();
+
+ // The excessRedundancyMap should initially be empty.
+ assertEquals(0, nn1.getNamesystem().getBlockManager().getExcessBlocksCount());
+ assertEquals(0, nn2.getNamesystem().getBlockManager().getExcessBlocksCount());
+
+ FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);
+
+ // Create test file.
+ Path file = new Path("/test");
+ long fileLength = 512;
+ DFSTestUtil.createFile(fs, file, fileLength, (short) 8, 0L);
Review Comment:
Is setting the replication to 4 enough here?
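
For concreteness, the change being suggested in both comments might look like the
following sketch. This is hypothetical, not the committed code; the values 4 for
both `numDataNodes` and the replication factor are the reviewer's proposed
settings, and the elided body is unchanged from the PR:

```java
// Hypothetical sketch of the reviewer's suggestion: 4 DataNodes and an
// initial replication of 4 still exercise the excess-replica path once
// the test later decreases the replication factor.
try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .nnTopology(MiniDFSNNTopology.simpleHATopology()).numDataNodes(4).build()) {
  ...
  DFSTestUtil.createFile(fs, file, fileLength, (short) 4, 0L);
  ...
}
```

A smaller cluster keeps the test's intent intact while reducing startup cost and
flakiness risk in CI.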
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]