[
https://issues.apache.org/jira/browse/HDFS-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17427590#comment-17427590
]
JiangHua Zhu commented on HDFS-16269:
-------------------------------------
With the new patch, the exception no longer occurs. Here is a sample run:
./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs
xxxx -op blockReport -datanodes 3 -reports 3
Result:
21/10/12 17:05:09 INFO namenode.NNThroughputBenchmark: Starting benchmark:
blockReport
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: Creating 10 files with
10 blocks each.
21/10/12 17:05:10 FATAL namenode.NNThroughputBenchmark: Log level = ERROR
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: Starting 3
blockReport(s).
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark:
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: --- blockReport inputs
---
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: reports = 3
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: datanodes = 3 (100, 54,
58)
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: blocksPerReport = 100
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: blocksPerFile = 10
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: --- blockReport stats
---
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: # operations: 3
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: Elapsed Time: 8
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: Ops per sec: 375.0
21/10/12 17:05:10 INFO namenode.NNThroughputBenchmark: Average Time: 7
> [Fix] Improve NNThroughputBenchmark#blockReport operation
> ---------------------------------------------------------
>
> Key: HDFS-16269
> URL: https://issues.apache.org/jira/browse/HDFS-16269
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: benchmarks, namenode
> Affects Versions: 2.9.2
> Reporter: JiangHua Zhu
> Assignee: JiangHua Zhu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, an
> exception is thrown.
> Command used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs
> xxxx -op blockReport -datanodes 3 -reports 1
> The exception:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark:
> blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with
> 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark:
> java.lang.ArrayIndexOutOfBoundsException: 50009
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Checked the code and found that the problem appears here (excerpt from
> NNThroughputBenchmark$BlockReportStats#addBlocks):
> private ExtendedBlock addBlocks(String fileName, String clientName)
>     throws IOException {
>   ...
>   // loc is the LocatedBlock returned for each block added to the file
>   for (DatanodeInfo dnInfo : loc.getLocations()) {
>     int dnIdx = dnInfo.getXferPort() - 1;
>     datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
>   }
>   ...
> }
> As can be seen, dnInfo.getXferPort() returns a port number, which should not
> be used as an index into the datanodes array.
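> To illustrate the failure mode, here is a minimal, self-contained sketch
> (not the actual patch; TinyDatanode below is a hypothetical stand-in for the
> benchmark's internal datanode class). Using a transfer port such as 50010 as
> an array index overflows a small datanodes array, reproducing the
> ArrayIndexOutOfBoundsException; looking the datanode up by port in a map is
> one safe alternative:

```java
import java.util.HashMap;
import java.util.Map;

public class PortIndexDemo {
    // Hypothetical stand-in for the benchmark's internal datanode class.
    static class TinyDatanode {
        final int xferPort;
        int blockCount = 0;
        TinyDatanode(int xferPort) { this.xferPort = xferPort; }
        void addBlock() { blockCount++; }
    }

    public static void main(String[] args) {
        // Simulated datanodes whose transfer ports are real port numbers
        // (e.g. 50010), not small consecutive integers.
        TinyDatanode[] datanodes = {
            new TinyDatanode(50010),
            new TinyDatanode(50011),
            new TinyDatanode(50012)
        };
        int port = 50010;

        // Buggy pattern: port - 1 used as an array index.
        try {
            datanodes[port - 1].addBlock(); // 50009 >= array length 3
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("buggy lookup failed: " + e.getMessage());
        }

        // Safer pattern: map each transfer port back to its datanode.
        Map<Integer, TinyDatanode> byPort = new HashMap<>();
        for (TinyDatanode dn : datanodes) {
            byPort.put(dn.xferPort, dn);
        }
        byPort.get(port).addBlock();
        System.out.println("blocks on port " + port + ": "
            + byPort.get(port).blockCount);
    }
}
```

> The actual fix may differ; the point is only that the datanode must be
> resolved by some mapping, not by treating the port itself as an index.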
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]