ayushtkn commented on a change in pull request #4057:
URL: https://github.com/apache/hadoop/pull/4057#discussion_r830838309
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java
##########
@@ -136,6 +137,48 @@ public void testCheckBlockReportLease() throws Exception {
}
}
+ @Test
+ public void testCheckBlockReportLeaseWhenDnUnregister() throws Exception {
+ HdfsConfiguration conf = new HdfsConfiguration();
+ Random rand = new Random();
+
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+ .numDataNodes(1).build()) {
+ FSNamesystem fsn = cluster.getNamesystem();
+ BlockManager blockManager = fsn.getBlockManager();
+ String poolId = cluster.getNamesystem().getBlockPoolId();
+ NamenodeProtocols rpcServer = cluster.getNameNodeRpc();
+
+ // Remove the unique datanode to simulate the unregistered situation.
+ DataNode dn = cluster.getDataNodes().get(0);
+
+ blockManager.getDatanodeManager().getDatanodeMap().remove(dn.getDatanodeUuid());
+
+ // Trigger BlockReport.
+ DatanodeRegistration dnRegistration = dn.getDNRegistrationForBP(poolId);
+ StorageReport[] storages = dn.getFSDataset().getStorageReports(poolId);
+ ExecutorService pool = Executors.newFixedThreadPool(1);
+ BlockReportContext brContext = new BlockReportContext(1, 0,
+ rand.nextLong(), 1);
+ Future<DatanodeCommand> sendBRfuturea = pool.submit(() -> {
Review comment:
The variable name is very confusing; I couldn't understand what the 'a' at the
end is supposed to mean.
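For illustration only, something like the following reads more clearly (the
lambda body here is a placeholder pieced together from the surrounding hunk,
not the exact code in the PR):
```java
// Illustrative rename only: the trailing 'a' in 'sendBRfuturea' carries no
// meaning, so a plain name makes the intent obvious.
Future<DatanodeCommand> sendBRFuture = pool.submit(() ->
    // Placeholder body: submit the block report to the NameNode, as the PR does.
    rpcServer.blockReport(dnRegistration, poolId,
        new StorageBlockReport[] {
            new StorageBlockReport(storages[0].getStorage(), BlockListAsLongs.EMPTY)
        },
        brContext));
```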
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java
##########
@@ -136,6 +137,48 @@ public void testCheckBlockReportLease() throws Exception {
}
}
+ @Test
+ public void testCheckBlockReportLeaseWhenDnUnregister() throws Exception {
+ HdfsConfiguration conf = new HdfsConfiguration();
+ Random rand = new Random();
+
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+ .numDataNodes(1).build()) {
Review comment:
By default the number of datanodes is 1, so `numDataNodes(1)` isn't needed here.
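For reference, a sketch of what this suggests, relying on the builder's default
of a single datanode (rest of the test unchanged):
```java
// MiniDFSCluster.Builder already defaults to one datanode,
// so the explicit numDataNodes(1) call can simply be dropped.
try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build()) {
  // ... rest of the test body unchanged ...
}
```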
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##########
@@ -2751,6 +2751,11 @@ public boolean checkBlockReportLease(BlockReportContext context,
return true;
}
DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
+ if (node == null) {
+ final UnregisteredNodeException e = new UnregisteredNodeException(nodeID, null);
+ NameNode.stateChangeLog.error("BLOCK* NameSystem.getDatanode: " + e.getLocalizedMessage());
+ throw e;
Review comment:
Can you share the log message and the exception trace after this change? We
have passed null here in the exception, and I suspect it can lead to something
like:
``Node null is expected to serve this storage``
which doesn't make sense to me. Maybe a more appropriate message should be used.
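As one possible direction (a sketch only, not taken from the PR), the log line
could state the actual condition instead of leaning on the exception text; note
the exception message itself would still mention the null node unless a
different constructor or message is used:
```java
// Sketch: log that the datanode is missing from the DatanodeManager, so the
// operator is not left to decode "Node null is expected to serve this storage".
if (node == null) {
  final UnregisteredNodeException e = new UnregisteredNodeException(nodeID, null);
  NameNode.stateChangeLog.error("BLOCK* checkBlockReportLease: datanode "
      + nodeID + " is not registered with this NameNode, rejecting block report.");
  throw e;
}
```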
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]