[ https://issues.apache.org/jira/browse/HDFS-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17748370#comment-17748370 ]

ASF GitHub Bot commented on HDFS-17093:
---------------------------------------

zhangshuyan0 commented on code in PR #5855:
URL: https://github.com/apache/hadoop/pull/5855#discussion_r1277008543


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java:
##########
@@ -269,4 +272,88 @@ private StorageBlockReport[] createReports(DatanodeStorage[] dnStorages,
     }
     return storageBlockReports;
   }
+
+  @Test
+  public void testFirstIncompleteBlockReport() throws Exception {
+    HdfsConfiguration conf = new HdfsConfiguration();
+    Random rand = new Random();
+
+    try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+            .numDataNodes(1).build()) {
+      cluster.waitActive();
+
+      FSNamesystem fsn = cluster.getNamesystem();
+      
+      NameNode nameNode = cluster.getNameNode();
+      // pretend to be in safemode
+      NameNodeAdapter.enterSafeMode(nameNode, false);
+
+      BlockManager blockManager = fsn.getBlockManager();
+      BlockManager spyBlockManager = spy(blockManager);
+      fsn.setBlockManagerForTesting(spyBlockManager);
+      String poolId = cluster.getNamesystem().getBlockPoolId();
+
+      NamenodeProtocols rpcServer = cluster.getNameNodeRpc();
+
+      // Test based on one DataNode report to Namenode
+      DataNode dn = cluster.getDataNodes().get(0);
+      DatanodeDescriptor datanodeDescriptor = spyBlockManager
+              .getDatanodeManager().getDatanode(dn.getDatanodeId());
+
+      DatanodeRegistration dnRegistration = dn.getDNRegistrationForBP(poolId);
+      StorageReport[] storages = dn.getFSDataset().getStorageReports(poolId);
+
+      // Send heartbeat and request full block report lease
+      HeartbeatResponse hbResponse = rpcServer.sendHeartbeat(
+              dnRegistration, storages, 0, 0, 0, 0, 0, null, true,
+              SlowPeerReports.EMPTY_REPORT, SlowDiskReports.EMPTY_REPORT);
+
+      DelayAnswer delayer = new DelayAnswer(BlockManager.LOG);
+      doAnswer(delayer).when(spyBlockManager).processReport(
+              any(DatanodeStorageInfo.class),
+              any(BlockListAsLongs.class));
+
+      ExecutorService pool = Executors.newFixedThreadPool(1);
+
+      // Trigger sendBlockReport
+      BlockReportContext brContext = new BlockReportContext(1, 0,
+              rand.nextLong(), hbResponse.getFullBlockReportLeaseId());
+      // Build every storage with 100 blocks for sending report
+      DatanodeStorage[] datanodeStorages
+              = new DatanodeStorage[storages.length];
+      for (int i = 0; i < storages.length; i++) {
+        datanodeStorages[i] = storages[i].getStorage();
+        StorageBlockReport[] reports = createReports(datanodeStorages, 100);
+
+        // For the first storage, send the report an extra time, simulating a
+        // first full block report that failed and only got through once
+        if (i == 0) {
+          rpcServer.blockReport(dnRegistration, poolId, reports, brContext);
+        }
+
+        // Send blockReport
+        DatanodeCommand datanodeCommand = rpcServer.blockReport(dnRegistration, poolId, reports,
+                brContext);
+
+        // Wait until BlockManager calls processReport
+        delayer.waitForCall();
+
+        // Remove full block report lease about dn
+        spyBlockManager.getBlockReportLeaseManager()
+                .removeLease(datanodeDescriptor);

Review Comment:
   The problem in this UT is the same as before: the test still actively calls 
`removeLease`, which does not seem to happen in the real code path. It remains 
confusing why `removeLease` is called when the first block report was not 
successfully processed.
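
For context, the production-path lease removal the reviewer refers to is the 
startup-safemode branch in `BlockManager#processReport`, quoted in full in the 
issue description below. A trimmed copy with explanatory comments, shown here 
only as an illustration of where `removeLease` is actually invoked:

{code:java}
// Excerpt (trimmed) of the branch quoted in HDFS-17093 below: when a storage
// has already reported once (getBlockReportCount() > 0) while the namenode is
// still in startup safemode, it is the namenode itself that removes the
// datanode's full block report lease, not the datanode or a test helper.
if (namesystem.isInStartupSafeMode()
    && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
    && storageInfo.getBlockReportCount() > 0) {
  blockReportLeaseManager.removeLease(node);
  return !node.hasStaleStorages();
}
{code}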





> In the case of all datanodes sending FBR when the namenode restarts (large 
> clusters), there is an issue with incomplete block reporting
> ---------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17093
>                 URL: https://issues.apache.org/jira/browse/HDFS-17093
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.3.4
>            Reporter: Yanlei Yu
>            Priority: Minor
>              Labels: pull-request-available
>
> In our cluster of 800+ nodes, after restarting the namenode, we found that 
> some datanodes did not report enough blocks, causing the namenode to stay in 
> safe mode for a long time after the restart because of incomplete block 
> reporting.
> In the logs of a datanode with incomplete block reporting, I found that the 
> first FBR attempt failed, possibly due to namenode load, and that a second 
> FBR attempt was then made:
> {code:java}
> ....
> 2023-07-17 11:29:28,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Unsuccessfully sent block report 0x6237a52c1e817e,  containing 12 storage 
> report(s), of which we sent 1. The reports had 1099057 total blocks and used 
> 1 RPC(s). This took 294 msec to generate and 101721 msecs for RPC and NN 
> processing. Got back no commands.
> 2023-07-17 11:37:04,014 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Successfully sent block report 0x62382416f3f055,  containing 12 storage 
> report(s), of which we sent 12. The reports had 1099048 total blocks and used 
> 12 RPC(s). This took 295 msec to generate and 11647 msecs for RPC and NN 
> processing. Got back no commands. {code}
> There's nothing wrong with that: retrying the send when it fails is expected. 
> But on the namenode side, the logic is:
> {code:java}
> if (namesystem.isInStartupSafeMode()
>     && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
>     && storageInfo.getBlockReportCount() > 0) {
>   blockLog.info("BLOCK* processReport 0x{} with lease ID 0x{}: "
>       + "discarded non-initial block report from {}"
>       + " because namenode still in startup phase",
>       strBlockReportId, fullBrLeaseId, nodeID);
>   blockReportLeaseManager.removeLease(node);
>   return !node.hasStaleStorages();
> } {code}
> When a storage is identified as having already reported (that is, 
> storageInfo.getBlockReportCount() > 0), the lease is removed from the 
> datanode, so the second (retried) report fails because there is no lease.
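
To make the failure sequence described above concrete, here is a minimal 
standalone sketch; the class and method names (FbrLeaseModel, grantLease, 
checkLease) are hypothetical and only model the lease bookkeeping described in 
this issue, not the actual Hadoop implementation:

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Hypothetical model of the full block report (FBR) lease handling above. */
public class FbrLeaseModel {
  // datanode id -> currently granted full block report lease id
  private final Map<String, Long> leases = new HashMap<>();

  void grantLease(String datanodeId, long leaseId) {
    leases.put(datanodeId, leaseId);
  }

  void removeLease(String datanodeId) {
    leases.remove(datanodeId);
  }

  boolean checkLease(String datanodeId, long leaseId) {
    Long current = leases.get(datanodeId);
    return current != null && current == leaseId;
  }

  public static void main(String[] args) {
    FbrLeaseModel nn = new FbrLeaseModel();

    // Heartbeat: the namenode grants a full block report lease to the datanode.
    nn.grantLease("dn1", 0x6237a52cL);

    // First FBR attempt fails after only 1 of 12 storages is processed, so
    // that storage's blockReportCount is now > 0.

    // Retry: the namenode is still in startup safemode, treats the report for
    // that storage as non-initial, and removes the datanode's lease (the
    // branch quoted above).
    nn.removeLease("dn1");

    // The remaining storages in the retried report then fail the lease check,
    // so their blocks are never processed and safemode is not exited.
    System.out.println("retry holds a valid lease? "
        + nn.checkLease("dn1", 0x6237a52cL));   // prints false
  }
}
{code}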


