[
https://issues.apache.org/jira/browse/HDFS-16964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17704588#comment-17704588
]
ASF GitHub Bot commented on HDFS-16964:
---------------------------------------
Hexiaoqiao commented on code in PR #5510:
URL: https://github.com/apache/hadoop/pull/5510#discussion_r1147515572
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -4020,12 +4014,22 @@ public void setReplication(
}
}
+ private void processExtraRedundancyBlock(final BlockInfo block,
Review Comment:
Suggest adding some Javadoc for this new method.
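For illustration, the Javadoc could look roughly like the following (a sketch only; the exact wording and behavior description are up to the patch author):

```java
/**
 * Process a block that has more replicas than its expected redundancy.
 * Excess replicas on non-stale storages are handled right away; if any
 * replica sits on a stale storage, the block is postponed so it can be
 * re-examined once that storage is no longer stale.
 *
 * @param block       the block with extra redundancy
 * @param replication the expected number of replicas
 * @param addedNode   the datanode on which a replica was just added, if any
 * @param delNodeHint the datanode preferred for deletion, if any
 */
```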
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -4035,17 +4039,17 @@ private void processExtraRedundancyBlock(final BlockInfo block,
Collection<DatanodeStorageInfo> nonExcess = new ArrayList<>();
Collection<DatanodeDescriptor> corruptNodes = corruptReplicas
.getNodes(block);
+ boolean hasStaleStorage = false;
+ DatanodeStorageInfo staleStorage = null;
Review Comment:
This should be a `Set` of `DatanodeStorageInfo`, because there could be more
than one stale storage here.
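A minimal sketch of the suggested change, assuming `java.util.Set`/`HashSet` are imported and using the existing `blocksMap` and `DatanodeStorageInfo#areBlockContentsStale()` APIs (names and placement are illustrative, not the final patch):

```java
// Collect every stale storage seen while walking the block's replicas;
// more than one of them can be stale at the same time.
Set<DatanodeStorageInfo> staleStorages = new HashSet<>();
for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
  if (storage.areBlockContentsStale()) {
    staleStorages.add(storage);
  }
}
// Postponement is only needed when at least one storage is stale.
boolean hasStaleStorage = !staleStorages.isEmpty();
```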
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -4020,12 +4014,22 @@ public void setReplication(
}
}
+  private void processExtraRedundancyBlock(final BlockInfo block,
+      final short replication, final DatanodeDescriptor addedNode,
+      DatanodeDescriptor delNodeHint) {
+    if (!processExtraRedundancyBlockWithoutPostpone(block, replication,
+        addedNode, delNodeHint)) {
+      postponeBlock(block);
+    }
+  }
+
/**
* Find how many of the containing nodes are "extra", if any.
* If there are any extras, call chooseExcessRedundancies() to
* mark them in the excessRedundancyMap.
+ * @return if all redundancy replicas are removed
Review Comment:
`@return if all redundancy replicas are removed`
->
`@return true if all redundancy replicas are removed.`
> Improve processing of excess redundancy after failover
> ------------------------------------------------------
>
> Key: HDFS-16964
> URL: https://issues.apache.org/jira/browse/HDFS-16964
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
>
> After a failover, a block with excess redundancy cannot be processed until
> none of its replicas is stale, because the stale replicas may already have
> been deleted. That is to say, we have to wait for the full block reports
> (FBRs) of all datanodes on which the block resides before the redundant
> replicas can be deleted. This is unnecessary: we can bypass stale replicas
> when dealing with excess replicas and delete non-stale excess replicas in a
> more timely manner.