Hexiaoqiao commented on code in PR #4353:
URL: https://github.com/apache/hadoop/pull/4353#discussion_r960344532


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -2350,6 +2350,9 @@ private void invalidate(String bpid, Block[] invalidBlks, boolean async)
         removing = volumeMap.remove(bpid, invalidBlks[i]);
         addDeletingBlock(bpid, removing.getBlockId());
         LOG.debug("Block file {} is to be deleted", removing.getBlockURI());
+        if (datanode.getMetrics() != null) {

Review Comment:
   Ah, I believe this will make `BlocksRemoved` more precise, but I do not
think it will be exact. Consider the case where `removing == null`, which
means the DataNode does not manage this pending-delete replica; in that case
we should not increase the metric. There may be other cases as well that I
have not thought through deeply. I just suggest we consider whether
`removing` can be null.
   Another point: line 2352 will throw an NPE because it does not check
whether `removing` is null. That is a separate issue, not related to this PR.
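   For illustration, a minimal sketch of the null guard being suggested here. The enclosing loop, the `continue`, and the `incrBlocksRemoved` metric call are assumptions based on the diff context and the PR's stated intent, not a definitive patch:
   ```java
   // Hypothetical sketch: check removing before any use of it.
   removing = volumeMap.remove(bpid, invalidBlks[i]);
   if (removing == null) {
     // The DataNode does not manage this pending-delete replica;
     // skip the bookkeeping and do not bump the BlocksRemoved metric.
     continue;
   }
   addDeletingBlock(bpid, removing.getBlockId());
   LOG.debug("Block file {} is to be deleted", removing.getBlockURI());
   if (datanode.getMetrics() != null) {
     // Assumed metric hook added by this PR.
     datanode.getMetrics().incrBlocksRemoved(1);
   }
   ```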



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

