[
https://issues.apache.org/jira/browse/HDFS-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17585263#comment-17585263
]
ASF GitHub Bot commented on HDFS-16735:
---------------------------------------
zhangshuyan0 commented on PR #4780:
URL: https://github.com/apache/hadoop/pull/4780#issuecomment-1228259955
> LGTM.
>
> Should we add `dfs.namenode.remove.bad.batch.num` to the xml configuration
> file? Would it be better to add some unit tests?
Thanks @slfan1989, I have added the relevant configuration to the xml file.
This patch mainly improves the processing performance of HeartbeatManager in
large-scale clusters and has no impact on existing functionality, so I did not
add a unit test.
> Reduce the number of HeartbeatManager loops
> -------------------------------------------
>
> Key: HDFS-16735
> URL: https://issues.apache.org/jira/browse/HDFS-16735
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
>
> HeartbeatManager processes only one dead datanode (or failed storage) per
> round in heartbeatCheck(); that is, if there are ten failed storages, all
> datanode states must be scanned ten times, which is unnecessary and wastes
> resources. This patch makes the number of bad storages processed per scan
> configurable.
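To illustrate the effect of the change, here is a minimal sketch (not Hadoop's actual HeartbeatManager code; the class, method, and the `badBatchNum` parameter are hypothetical stand-ins) showing why batching reduces the number of full scans: if each scan removes at most one dead node, k dead nodes cost k scans, whereas removing up to a configurable batch per scan needs only ceil(k / batch) scans.

```java
public class HeartbeatCheckSketch {
    // Counts how many full scans of all datanode states are needed to
    // process `deadNodes` failures when each scan handles at most
    // `badBatchNum` of them. badBatchNum = 1 models the old behavior;
    // a larger value models dfs.namenode.remove.bad.batch.num.
    static int scansNeeded(int deadNodes, int badBatchNum) {
        int scans = 0;
        int remaining = deadNodes;
        while (remaining > 0) {
            // One full pass over datanode states removes up to a batch.
            remaining -= Math.min(badBatchNum, remaining);
            scans++;
        }
        return scans;
    }

    public static void main(String[] args) {
        System.out.println(scansNeeded(10, 1)); // old behavior: 10 scans
        System.out.println(scansNeeded(10, 4)); // batched: 3 scans
    }
}
```

With ten failed storages, the old one-per-round behavior requires ten full scans, while a batch size of four cuts that to three, which is the performance gain the patch targets in large clusters.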
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]