[
https://issues.apache.org/jira/browse/HDFS-16678?focusedWorklogId=793961&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-793961
]
ASF GitHub Bot logged work on HDFS-16678:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 21/Jul/22 22:37
Start Date: 21/Jul/22 22:37
Worklog Time Spent: 10m
Work Description: ZanderXu commented on code in PR #4606:
URL: https://github.com/apache/hadoop/pull/4606#discussion_r927154231
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java:
##########
@@ -544,28 +548,30 @@ public String getNodeUsage() {
final Map<String, Map<String, Object>> info = new HashMap<>();
try {
- RouterRpcServer rpcServer = this.router.getRpcServer();
- DatanodeInfo[] live = rpcServer.getDatanodeReport(
- DatanodeReportType.LIVE, false, timeOut);
-
- if (live.length > 0) {
- float totalDfsUsed = 0;
- float[] usages = new float[live.length];
- int i = 0;
- for (DatanodeInfo dn : live) {
- usages[i++] = dn.getDfsUsedPercent();
- totalDfsUsed += dn.getDfsUsedPercent();
- }
- totalDfsUsed /= live.length;
- Arrays.sort(usages);
- median = usages[usages.length / 2];
- max = usages[usages.length - 1];
- min = usages[0];
-
- for (i = 0; i < usages.length; i++) {
- dev += (usages[i] - totalDfsUsed) * (usages[i] - totalDfsUsed);
+ if (this.enableGetDNUsage) {
+ RouterRpcServer rpcServer = this.router.getRpcServer();
+ DatanodeInfo[] live = rpcServer.getDatanodeReport(
+ DatanodeReportType.LIVE, false, timeOut);
+
+ if (live.length > 0) {
+ float totalDfsUsed = 0;
+ float[] usages = new float[live.length];
+ int i = 0;
+ for (DatanodeInfo dn : live) {
+ usages[i++] = dn.getDfsUsedPercent();
Review Comment:
Yes, `rpcServer.getDatanodeReport()` is expensive, and it becomes increasingly
so as the number of DNs or downstream nameservices in the cluster grows, for
example 10,000+ DNs, 50,000+ DNs, 20+ NSs, or 50+ NSs.
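
For readers skimming the hunk above: the expensive part is the
getDatanodeReport() call itself; the statistics that getNodeUsage() derives
from the per-DN DFS-used percentages are cheap to compute locally. The
standalone sketch below mirrors that computation with made-up sample values;
the class name, the sample data, and the final square-root step are
illustrative assumptions, not code copied from the PR.

import java.util.Arrays;

/**
 * Standalone sketch of the usage statistics computed in getNodeUsage()
 * (mean/median/max/min/stddev of per-DN DFS-used percentages). The sample
 * values below are made up; in RBFMetrics the percentages come from the
 * expensive rpcServer.getDatanodeReport(LIVE, ...) call that the new
 * enableGetDNUsage flag lets operators skip.
 */
public class NodeUsageSketch {
  public static void main(String[] args) {
    // Hypothetical dfsUsedPercent values for five DNs.
    float[] usages = {12.5f, 48.0f, 51.3f, 49.9f, 75.2f};

    float totalDfsUsed = 0;
    for (float u : usages) {
      totalDfsUsed += u;
    }
    totalDfsUsed /= usages.length;                // mean DFS-used percentage

    Arrays.sort(usages);
    float median = usages[usages.length / 2];
    float max = usages[usages.length - 1];
    float min = usages[0];

    float dev = 0;
    for (float u : usages) {                      // accumulate squared deviation from the mean
      dev += (u - totalDfsUsed) * (u - totalDfsUsed);
    }
    dev = (float) Math.sqrt(dev / usages.length); // population standard deviation

    System.out.printf("median=%.1f max=%.1f min=%.1f stddev=%.2f%n",
        median, max, min, dev);
  }
}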
Issue Time Tracking
-------------------
Worklog Id: (was: 793961)
Time Spent: 50m (was: 40m)
> RBF supports disable getNodeUsage() in RBFMetrics
> -------------------------------------------------
>
> Key: HDFS-16678
> URL: https://issues.apache.org/jira/browse/HDFS-16678
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: ZanderXu
> Assignee: ZanderXu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> In our prod environment, we collect RBF metrics every 15s through
> jmx_exporter, and we found that the collection task often failed.
> After tracing, we found that the collection task is blocked at getNodeUsage()
> in RBFMetrics, because it collects every datanode's usage from the
> downstream nameservices. This is a very expensive and almost useless
> operation, because in most scenarios each NameService contains almost the
> same DNs. We can get the data usage from any one nameservice instead of from RBF.
> So RBF should support disabling getNodeUsage() in RBFMetrics.
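
For illustration, a disable switch like this is usually wired up as a boolean
router configuration key read once when RBFMetrics is constructed, and then
checked in getNodeUsage() before issuing the datanode report (as in the hunk
above). The sketch below shows that pattern; the class name, the key name
dfs.federation.router.enable.get.dn.usage, and the default of true are
hypothetical placeholders rather than names confirmed by the PR.

import org.apache.hadoop.conf.Configuration;

/**
 * Sketch of how the disable switch could be read from the router
 * configuration. The key name and default below are hypothetical
 * placeholders, not confirmed identifiers from PR #4606.
 */
public class GetDNUsageConfigSketch {
  // Hypothetical key/default; the real RBFConfigKeys entry may differ.
  static final String ENABLE_GET_DN_USAGE_KEY =
      "dfs.federation.router.enable.get.dn.usage";
  static final boolean ENABLE_GET_DN_USAGE_DEFAULT = true;

  private final boolean enableGetDNUsage;

  public GetDNUsageConfigSketch(Configuration conf) {
    // Read once at construction time; getNodeUsage() then checks this flag
    // before issuing the expensive getDatanodeReport() call.
    this.enableGetDNUsage =
        conf.getBoolean(ENABLE_GET_DN_USAGE_KEY, ENABLE_GET_DN_USAGE_DEFAULT);
  }

  public boolean isGetDNUsageEnabled() {
    return enableGetDNUsage;
  }
}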
--
This message was sent by Atlassian Jira
(v8.20.10#820010)