[ https://issues.apache.org/jira/browse/HDFS-17109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18029923#comment-18029923 ]
ASF GitHub Bot commented on HDFS-17109:
---------------------------------------
github-actions[bot] closed pull request #6046: HDFS-17109. Null Pointer Exception when running TestBlockManager
URL: https://github.com/apache/hadoop/pull/6046
> Null Pointer Exception when running TestBlockManager
> ----------------------------------------------------
>
> Key: HDFS-17109
> URL: https://issues.apache.org/jira/browse/HDFS-17109
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: ConfX
> Priority: Critical
> Labels: pull-request-available
> Attachments: reproduce.sh
>
>
> h2. What happened
> After setting {{dfs.namenode.redundancy.considerLoadByStorageType=true}},
> running the test
> {{org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager#testOneOfTwoRacksDecommissioned}}
> results in a {{NullPointerException}}.
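> As a rough illustration only (this is not taken from the attached reproduce.sh, and the surrounding test setup is assumed), the option can be enabled on a test {{Configuration}} before the block manager is created:
> {noformat}
> // Hypothetical sketch: enable per-storage-type load consideration in a test.
> // Only the string key below comes from this report; the rest is assumed setup.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
>
> Configuration conf = new HdfsConfiguration();
> conf.setBoolean("dfs.namenode.redundancy.considerLoadByStorageType", true);
> {noformat}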
> h2. Where's the bug
> In the class {{{}BlockPlacementPolicyDefault{}}}:
> {noformat}
> for (StorageType s : storageTypes) {
>   StorageTypeStats storageTypeStats = storageStats.get(s);
>   numNodes += storageTypeStats.getNodesInService();
>   numXceiver += storageTypeStats.getNodesInServiceXceiverCount();
> }{noformat}
> However, the code does not check whether {{storageTypeStats}} is null before
> dereferencing it; when {{storageStats}} has no entry for a requested storage
> type, {{storageStats.get(s)}} returns null and the loop throws the NPE.
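> One possible guard (a sketch only, not a confirmed patch; the variable names are taken from the snippet above) would be to skip storage types for which no stats exist:
> {noformat}
> // Sketch of a defensive fix: skip storage types with no recorded stats
> // instead of dereferencing a null StorageTypeStats.
> for (StorageType s : storageTypes) {
>   StorageTypeStats storageTypeStats = storageStats.get(s);
>   if (storageTypeStats == null) {
>     continue;
>   }
>   numNodes += storageTypeStats.getNodesInService();
>   numXceiver += storageTypeStats.getNodesInServiceXceiverCount();
> }
> {noformat}
> Whether skipping the type is the right semantics, versus treating it as contributing zero nodes and xceivers, is left to the maintainers.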
> h2. How to reproduce
> # Set {{dfs.namenode.redundancy.considerLoadByStorageType=true}}
> # Run
> {{org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager#testOneOfTwoRacksDecommissioned}}
> and the following exception should be observed:
> {noformat}
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getInServiceXceiverAverageByStorageType(BlockPlacementPolicyDefault.java:1044)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getInServiceXceiverAverage(BlockPlacementPolicyDefault.java:1023)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.excludeNodeByLoad(BlockPlacementPolicyDefault.java:1000)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.isGoodDatanode(BlockPlacementPolicyDefault.java:1086)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:855)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:782)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:557)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:478)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:350)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:170)
>   at org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:2031)
>   at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.scheduleSingleReplication(TestBlockManager.java:641)
>   at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestOneOfTwoRacksDecommissioned(TestBlockManager.java:364)
>   at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testOneOfTwoRacksDecommissioned(TestBlockManager.java:351){noformat}
>
> For an easy reproduction, run the attached reproduce.sh script.
> We are happy to provide a patch if this issue is confirmed.