haiyang1987 commented on code in PR #6176:
URL: https://github.com/apache/hadoop/pull/6176#discussion_r1356798872
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java:
##########
@@ -1007,6 +1013,7 @@ public void updateRegInfo(DatanodeID nodeReg) {
for(DatanodeStorageInfo storage : getStorageInfos()) {
if (storage.getStorageType() != StorageType.PROVIDED) {
storage.setBlockReportCount(0);
+ storage.setBlockContentsStale(true);
Review Comment:
Thanks @zhangshuyan0 for your comment.
The modification here may not be directly related to the current problem.
The reason for making it is that if the current DN re-registers, block
deletions or exceptions may have occurred on the DN in the meantime. Because
the FBR has not yet completed, the NN's in-memory record can differ from the
blocks actually on the DN.
If processExtraRedundancyBlock is executed at this time, block loss may
occur.
For example, suppose a file has 2 recorded replicas but only 1 replica is
expected. When processExtraRedundancyBlock runs, a live DN may be chosen for
deletion, which would cause a missing block. So marking the storages as
blockContentsStale when the DN re-registers avoids this case.
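The race described above can be illustrated with a minimal, self-contained sketch. This is not Hadoop code: `Storage` and `chooseExcessReplicaToDelete` are hypothetical stand-ins for `DatanodeStorageInfo` and the excess-redundancy path, showing only the idea that a storage flagged stale (FBR pending after re-registration) must not be picked for deletion:

```java
import java.util.ArrayList;
import java.util.List;

public class StaleStorageSketch {
    // Hypothetical stand-in for DatanodeStorageInfo: tracks staleness only.
    static class Storage {
        final String id;
        boolean blockContentsStale;
        Storage(String id, boolean stale) {
            this.id = id;
            this.blockContentsStale = stale;
        }
    }

    // Stand-in for the excess-replica path: pick a replica to delete,
    // but never from a storage still awaiting a full block report (FBR),
    // since the NN's record of that storage may no longer match the disk.
    static Storage chooseExcessReplicaToDelete(List<Storage> replicas) {
        for (Storage s : replicas) {
            if (!s.blockContentsStale) {
                return s; // safe: NN's view of this storage is up to date
            }
        }
        return null; // all storages stale: postpone deletion, avoid block loss
    }

    public static void main(String[] args) {
        // A block with 2 recorded replicas, expected replication of 1.
        // The DN holding s1 just re-registered, so its contents are stale.
        Storage s1 = new Storage("s1", true);   // re-registered, FBR pending
        Storage s2 = new Storage("s2", false);  // up to date
        List<Storage> replicas = new ArrayList<>();
        replicas.add(s1);
        replicas.add(s2);

        Storage victim = chooseExcessReplicaToDelete(replicas);
        System.out.println("delete from: " + victim.id);

        // If every replica's storage is stale, deletion is postponed entirely.
        s2.blockContentsStale = true;
        System.out.println("postponed: " + (chooseExcessReplicaToDelete(replicas) == null));
    }
}
```

Without the staleness check, s1 (whose replica may already be gone on disk) and s2 would both look deletable, and removing the only live replica s2 would lose the block.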
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]