[
https://issues.apache.org/jira/browse/HDFS-17555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17863129#comment-17863129
]
ASF GitHub Bot commented on HDFS-17555:
---------------------------------------
ayushtkn commented on code in PR #6894:
URL: https://github.com/apache/hadoop/pull/6894#discussion_r1666384647
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNThroughputBenchmark.java:
##########
@@ -246,4 +246,25 @@ public void testNNThroughputWithBaseDir() throws Exception
{
}
}
}
+
+ /**
+ * This test runs {@link NNThroughputBenchmark} against a mini DFS cluster
+ * for blockSize with letter suffix.
+ */
+ @Test(timeout = 120000)
+ public void testNNThroughputForBlockSizeWithLetterSuffix() throws Exception {
+ final Configuration conf = new HdfsConfiguration();
+ conf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+ conf.set(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, "1m");
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).
+ numDataNodes(3).build()) {
Review Comment:
No need for ``numDataNodes(3)``; you don't need 3 datanodes, you will have 1
by default.
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNThroughputBenchmark.java:
##########
@@ -246,4 +246,25 @@ public void testNNThroughputWithBaseDir() throws Exception
{
}
}
}
+
+ /**
+ * This test runs {@link NNThroughputBenchmark} against a mini DFS cluster
+ * for blockSize with letter suffix.
+ */
+ @Test(timeout = 120000)
+ public void testNNThroughputForBlockSizeWithLetterSuffix() throws Exception {
+ final Configuration conf = new HdfsConfiguration();
+ conf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+ conf.set(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, "1m");
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).
+ numDataNodes(3).build()) {
+ cluster.waitActive();
+ final Configuration benchConf = new HdfsConfiguration();
+ benchConf.setLong(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+ benchConf.set(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, "1m");
+ FileSystem.setDefaultUri(benchConf, cluster.getURI());
+ NNThroughputBenchmark.runBenchmark(benchConf,
+ new String[]{"-op", "create", "-keepResults", "-files", "3",
+ "-close"});
Review Comment:
Should add a case where blockSize is specified as an argument
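A sketch of the extra case the reviewer asks for, assuming the create op of NNThroughputBenchmark accepts a blockSize argument (the exact flag name ``-blockSize`` is an assumption here, not confirmed by this thread):

```java
// Inside the same try-with-resources block, after the first runBenchmark call.
// Passes the block size on the command line instead of via benchConf;
// the "-blockSize" flag name is hypothetical.
NNThroughputBenchmark.runBenchmark(benchConf,
    new String[]{"-op", "create", "-keepResults", "-files", "3",
        "-blockSize", "1m", "-close"});
```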
> Fix NumberFormatException of NNThroughputBenchmark when configured
> dfs.blocksize.
> ---------------------------------------------------------------------------------
>
> Key: HDFS-17555
> URL: https://issues.apache.org/jira/browse/HDFS-17555
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: benchmarks, hdfs
> Affects Versions: 3.3.5, 3.3.3, 3.3.4, 3.3.6
> Reporter: wangzhongwei
> Assignee: wangzhongwei
> Priority: Major
> Attachments: image-2024-06-20-19-17-10-099.png
>
>
> When using NNThroughputBenchmark, if the configuration item dfs.blocksize
> in hdfs-site.xml is set with a letter suffix, such as 256m, a
> NumberFormatException occurs.
> Command:
> hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs
> hdfs://xxxxx -op create -threads 100 -files 10000 -filesPerDir 100 -close
> !image-2024-06-20-19-17-10-099.png|width=631,height=202!
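The exception arises because a plain long parser (e.g. ``Long.parseLong``) cannot handle values like "256m". As a self-contained illustration (the class and method below are hypothetical, not Hadoop code), suffix-aware parsing in the style of Hadoop's ``Configuration.getLongBytes`` avoids the failure:

```java
// Stand-alone sketch of suffix-aware size parsing. This mirrors the general
// idea of Configuration.getLongBytes; it is NOT the actual HDFS-17555 patch.
public class BlockSizeParse {
    // Parses "16", "256k", "256m", "1g" into a byte count (binary prefixes).
    static long parseSize(String value) {
        String v = value.trim().toLowerCase();
        long multiplier;
        switch (v.charAt(v.length() - 1)) {
            case 'k': multiplier = 1L << 10; break;
            case 'm': multiplier = 1L << 20; break;
            case 'g': multiplier = 1L << 30; break;
            default:  return Long.parseLong(v); // plain number, no suffix
        }
        return Long.parseLong(v.substring(0, v.length() - 1)) * multiplier;
    }

    public static void main(String[] args) {
        // Long.parseLong("256m") would throw NumberFormatException;
        // suffix-aware parsing returns the byte count instead.
        System.out.println(parseSize("256m")); // 268435456
        System.out.println(parseSize("1m"));   // 1048576
    }
}
```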
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]