[
https://issues.apache.org/jira/browse/HDFS-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17751911#comment-17751911
]
ASF GitHub Bot commented on HDFS-17093:
---------------------------------------
Hexiaoqiao commented on code in PR #5855:
URL: https://github.com/apache/hadoop/pull/5855#discussion_r1286669004
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -2957,6 +2957,22 @@ public boolean processReport(final DatanodeID nodeID,
return !node.hasStaleStorages();
}
+ /**
+ * Remove the DN lease only when we have received block reports
+ * for all storages for a particular DN.
Review Comment:
Fix checkstyle.
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java:
##########
@@ -269,4 +271,85 @@ private StorageBlockReport[]
createReports(DatanodeStorage[] dnStorages,
}
return storageBlockReports;
}
+
+ @Test(timeout = 360000)
+ public void testFirstIncompleteBlockReport() throws Exception {
+ HdfsConfiguration conf = new HdfsConfiguration();
+ Random rand = new Random();
+
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+ .numDataNodes(1).build()) {
+ cluster.waitActive();
+
+ FSNamesystem fsn = cluster.getNamesystem();
+
+ NameNode nameNode = cluster.getNameNode();
+ // pretend to be in safemode
Review Comment:
The first letter needs to be uppercase, and the sentence should end with a
period.
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java:
##########
@@ -269,4 +271,85 @@ private StorageBlockReport[]
createReports(DatanodeStorage[] dnStorages,
}
return storageBlockReports;
}
+
+ @Test(timeout = 360000)
+ public void testFirstIncompleteBlockReport() throws Exception {
+ HdfsConfiguration conf = new HdfsConfiguration();
+ Random rand = new Random();
+
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+ .numDataNodes(1).build()) {
+ cluster.waitActive();
+
+ FSNamesystem fsn = cluster.getNamesystem();
+
+ NameNode nameNode = cluster.getNameNode();
+ // pretend to be in safemode
+ NameNodeAdapter.enterSafeMode(nameNode, false);
+
+ BlockManager blockManager = fsn.getBlockManager();
+ BlockManager spyBlockManager = spy(blockManager);
+ fsn.setBlockManagerForTesting(spyBlockManager);
+ String poolId = cluster.getNamesystem().getBlockPoolId();
+
+ NamenodeProtocols rpcServer = cluster.getNameNodeRpc();
+
+ // Test based on one DataNode report to Namenode
+ DataNode dn = cluster.getDataNodes().get(0);
+ DatanodeDescriptor datanodeDescriptor = spyBlockManager
+ .getDatanodeManager().getDatanode(dn.getDatanodeId());
Review Comment:
Keep the same alignment as the definition. Here we should indent by four
spaces only (the same issue on other lines should be updated too).
> In the case of all datanodes sending FBR when the namenode restarts (large
> clusters), there is an issue with incomplete block reporting
> ---------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-17093
> URL: https://issues.apache.org/jira/browse/HDFS-17093
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.3.4
> Reporter: Yanlei Yu
> Priority: Minor
> Labels: pull-request-available
>
> In our cluster of 800+ nodes, after restarting the namenode, we found that
> some datanodes did not report enough blocks, causing the namenode to stay in
> safe mode for a long time after restarting because of incomplete block
> reporting.
> I found in the logs of the datanode with incomplete block reporting that the
> first FBR attempt failed, possibly due to namenode stress, and then a second
> FBR attempt was made as follows:
> {code:java}
> ....
> 2023-07-17 11:29:28,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
> Unsuccessfully sent block report 0x6237a52c1e817e, containing 12 storage
> report(s), of which we sent 1. The reports had 1099057 total blocks and used
> 1 RPC(s). This took 294 msec to generate and 101721 msecs for RPC and NN
> processing. Got back no commands.
> 2023-07-17 11:37:04,014 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
> Successfully sent block report 0x62382416f3f055, containing 12 storage
> report(s), of which we sent 12. The reports had 1099048 total blocks and used
> 12 RPC(s). This took 295 msec to generate and 11647 msecs for RPC and NN
> processing. Got back no commands. {code}
> There is nothing wrong with that: the datanode retries the send if it
> fails. But in the namenode-side logic:
> {code:java}
> if (namesystem.isInStartupSafeMode()
>     && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
>     && storageInfo.getBlockReportCount() > 0) {
>   blockLog.info("BLOCK* processReport 0x{} with lease ID 0x{}: "
>       + "discarded non-initial block report from {}"
>       + " because namenode still in startup phase",
>       strBlockReportId, fullBrLeaseId, nodeID);
>   blockReportLeaseManager.removeLease(node);
>   return !node.hasStaleStorages();
> }
> {code}
> When a storage is identified as having already reported, i.e.
> storageInfo.getBlockReportCount() > 0, the lease is removed from the
> datanode, causing the second report attempt to fail because the datanode no
> longer holds a lease.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]