ZanderXu commented on code in PR #6176:
URL: https://github.com/apache/hadoop/pull/6176#discussion_r1361434914


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java:
##########
@@ -1007,6 +1013,7 @@ public void updateRegInfo(DatanodeID nodeReg) {
     for(DatanodeStorageInfo storage : getStorageInfos()) {
       if (storage.getStorageType() != StorageType.PROVIDED) {
         storage.setBlockReportCount(0);
+        storage.setBlockContentsStale(true);

Review Comment:
   @zhangshuyan0 Thanks for your reply. This case has nothing to do with 
excess replicas. The main problem is the window between "registerDataNode" and 
"blockReport". 
   
   During this window, the NameNode thinks that this DN contains the block, 
but the DN actually no longer stores it (let's set aside how the block was 
lost on this DN for now). So during this window, the NameNode shouldn't delete 
any replicas of this block, right? It can only safely delete replicas of this 
block after "blockReport", right?
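
   The invariant above can be sketched in a minimal, self-contained way. This 
is not Hadoop code: `StorageSketch`, `onReRegistration`, and 
`canDeleteReplicaFrom` are hypothetical names standing in for 
`DatanodeStorageInfo`, the `updateRegInfo` path in the diff, and the 
excess-replica deletion check. The idea is that the stale flag set on 
re-registration blocks deletion until the first full block report clears it.

```java
// Minimal sketch (hypothetical names, not actual Hadoop code): a storage that
// has just re-registered is marked stale until its first full block report,
// so replica deletion based on possibly-outdated state is blocked meanwhile.
public class StaleGuardSketch {

    /** Simplified stand-in for DatanodeStorageInfo. */
    static class StorageSketch {
        private boolean blockContentsStale = true; // stale until first block report
        private int blockReportCount = 0;

        /** Mirrors the diff: on re-registration, reset the report count and mark stale. */
        void onReRegistration() {
            blockReportCount = 0;
            blockContentsStale = true;
        }

        /** Processing a full block report refreshes the NameNode's view. */
        void onBlockReport() {
            blockReportCount++;
            blockContentsStale = false;
        }

        boolean isBlockContentsStale() {
            return blockContentsStale;
        }
    }

    /** Excess-replica deletion must skip storages whose contents may be stale. */
    static boolean canDeleteReplicaFrom(StorageSketch storage) {
        return !storage.isBlockContentsStale();
    }

    public static void main(String[] args) {
        StorageSketch s = new StorageSketch();
        s.onBlockReport();
        // After a block report, deletion from this storage is allowed.
        if (!canDeleteReplicaFrom(s)) throw new AssertionError("expected deletable");
        s.onReRegistration();
        // Between re-registration and the next block report, deletion is blocked.
        if (canDeleteReplicaFrom(s)) throw new AssertionError("expected blocked");
        s.onBlockReport();
        if (!canDeleteReplicaFrom(s)) throw new AssertionError("expected deletable again");
        System.out.println("ok");
    }
}
```

   Without the `setBlockContentsStale(true)` call in the diff, the sketch's 
re-registration step would leave the flag cleared, and deletion would wrongly 
be permitted during the window the comment describes.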



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

