[ https://issues.apache.org/jira/browse/HDFS-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17585254#comment-17585254 ]

ASF GitHub Bot commented on HDFS-16735:
---------------------------------------

zhangshuyan0 commented on code in PR #4780:
URL: https://github.com/apache/hadoop/pull/4780#discussion_r955836358


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java:
##########
@@ -96,6 +97,9 @@ class HeartbeatManager implements DatanodeStatistics {
     enableLogStaleNodes = conf.getBoolean(
         DFSConfigKeys.DFS_NAMENODE_ENABLE_LOG_STALE_DATANODE_KEY,
         DFSConfigKeys.DFS_NAMENODE_ENABLE_LOG_STALE_DATANODE_DEFAULT);
+    this.removeBatchNum =

Review Comment:
   Thanks, I've fixed these code style issues.
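(Editorial note: the added line in the diff above is truncated at "this.removeBatchNum =". Judging from the existing conf.getBoolean(...) call directly above it, the new field is presumably read from configuration in the same style; a minimal sketch, assuming a getInt read with hypothetical key and default constants -- the actual names in the patch may differ:

    // Hypothetical sketch only: the real config key/default constants may differ.
    this.removeBatchNum = conf.getInt(
        DFSConfigKeys.DFS_NAMENODE_REMOVE_DEAD_DATANODE_BATCHNUM_KEY,
        DFSConfigKeys.DFS_NAMENODE_REMOVE_DEAD_DATANODE_BATCHNUM_DEFAULT);
)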





> Reduce the number of HeartbeatManager loops
> -------------------------------------------
>
>                 Key: HDFS-16735
>                 URL: https://issues.apache.org/jira/browse/HDFS-16735
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Shuyan Zhang
>            Assignee: Shuyan Zhang
>            Priority: Major
>              Labels: pull-request-available
>
> HeartbeatManager processes only one dead datanode (and one failed storage) per 
> round of heartbeatCheck(). That is, if there are ten failed storages, all 
> datanode states must be scanned ten times, which is unnecessary and wastes 
> resources. This patch makes the number of bad storages processed per scan 
> configurable.
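(Editorial note: the idea described above can be illustrated with a small, self-contained sketch; this is not the actual HDFS code. A single scan collects up to removeBatchNum failed storages instead of rescanning the whole datanode list for each one. The Storage class, field names, and batch size below are hypothetical stand-ins for the real DatanodeDescriptor/DatanodeStorageInfo structures.

    import java.util.ArrayList;
    import java.util.List;

    public class BatchedHeartbeatCheckSketch {

      // Stand-in for a storage report; the real code inspects DatanodeStorageInfo state.
      static class Storage {
        final String id;
        final boolean failed;
        Storage(String id, boolean failed) { this.id = id; this.failed = failed; }
      }

      // Hypothetical batch size; in the patch this would come from configuration.
      private final int removeBatchNum;

      BatchedHeartbeatCheckSketch(int removeBatchNum) {
        this.removeBatchNum = removeBatchNum;
      }

      // One heartbeat-check pass: pick up to removeBatchNum failed storages
      // in a single scan, rather than returning after the first hit.
      List<Storage> collectFailedStorages(List<Storage> allStorages) {
        List<Storage> batch = new ArrayList<>();
        for (Storage s : allStorages) {
          if (s.failed) {
            batch.add(s);
            if (batch.size() >= removeBatchNum) {
              break; // cap the work done in one pass
            }
          }
        }
        return batch;
      }

      public static void main(String[] args) {
        List<Storage> storages = List.of(
            new Storage("s1", false), new Storage("s2", true),
            new Storage("s3", true), new Storage("s4", true));
        BatchedHeartbeatCheckSketch sketch = new BatchedHeartbeatCheckSketch(2);
        // With removeBatchNum = 2, one scan handles two failed storages instead
        // of one, halving the number of full scans needed in this example.
        System.out.println(sketch.collectFailedStorages(storages).size()
            + " failed storages handled this pass");
      }
    }

With removeBatchNum = 1 the sketch degenerates to the old one-per-scan behavior, so the configurable batch size only reduces the number of full scans; it does not change which storages are eventually processed.)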



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
