[ 
https://issues.apache.org/jira/browse/HDFS-16678?focusedWorklogId=796166&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796166
 ]

ASF GitHub Bot logged work on HDFS-16678:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 28/Jul/22 18:52
            Start Date: 28/Jul/22 18:52
    Worklog Time Spent: 10m 
      Work Description: goiri commented on code in PR #4606:
URL: https://github.com/apache/hadoop/pull/4606#discussion_r932570164


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java:
##########
@@ -537,35 +547,34 @@ public int getNumEnteringMaintenanceDataNodes() {
 
   @Override // NameNodeMXBean
   public String getNodeUsage() {
-    float median = 0;
-    float max = 0;
-    float min = 0;
-    float dev = 0;
+    double median = 0;
+    double max = 0;
+    double min = 0;
+    double dev = 0;
 
     final Map<String, Map<String, Object>> info = new HashMap<>();
     try {
-      RouterRpcServer rpcServer = this.router.getRpcServer();
-      DatanodeInfo[] live = rpcServer.getDatanodeReport(
-          DatanodeReportType.LIVE, false, timeOut);
+      DatanodeInfo[] live = null;
+      if (this.enableGetDNUsage) {
+        RouterRpcServer rpcServer = this.router.getRpcServer();
+        live = rpcServer.getDatanodeReport(DatanodeReportType.LIVE, false, timeOut);
+      } else {
+        LOG.debug("Getting node usage is disabled.");
+      }
 
-      if (live.length > 0) {
-        float totalDfsUsed = 0;
-        float[] usages = new float[live.length];
+      if (live != null && live.length > 0) {
+        double[] usages = new double[live.length];
         int i = 0;
         for (DatanodeInfo dn : live) {
           usages[i++] = dn.getDfsUsedPercent();
-          totalDfsUsed += dn.getDfsUsedPercent();
         }
-        totalDfsUsed /= live.length;
         Arrays.sort(usages);
         median = usages[usages.length / 2];
         max = usages[usages.length - 1];
         min = usages[0];
 
-        for (i = 0; i < usages.length; i++) {
-          dev += (usages[i] - totalDfsUsed) * (usages[i] - totalDfsUsed);
-        }
-        dev = (float) Math.sqrt(dev / usages.length);
+        StandardDeviation deviation = new StandardDeviation();
+        dev = deviation.evaluate(usages);
       }
     } catch (IOException e) {
       LOG.error("Cannot get the live nodes: {}", e.getMessage());
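
The hunk above widens the usage values from float to double and replaces the hand-rolled deviation loop with Commons Math's StandardDeviation. A standalone sketch of the same statistics (not the actual RBFMetrics code; the method name summarize and the plain-array return are illustrative only) — note that the default StandardDeviation constructor is bias-corrected (divides by n-1), whereas the removed loop divided by n:

```java
import java.util.Arrays;

public class NodeUsageStats {
  // Illustrative sketch of the statistics computed in getNodeUsage():
  // median, max, min, and standard deviation over per-datanode
  // DFS-used percentages. Returns {median, max, min, dev}.
  static double[] summarize(double[] usages) {
    double[] sorted = usages.clone();
    Arrays.sort(sorted);
    double median = sorted[sorted.length / 2];
    double max = sorted[sorted.length - 1];
    double min = sorted[0];

    double mean = 0;
    for (double u : sorted) {
      mean += u;
    }
    mean /= sorted.length;

    double dev = 0;
    for (double u : sorted) {
      dev += (u - mean) * (u - mean);
    }
    // Sample (n-1) deviation, matching Commons Math's default; the
    // removed loop used the population (n) variant instead.
    dev = Math.sqrt(dev / (sorted.length - 1));

    return new double[] {median, max, min, dev};
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(
        summarize(new double[] {10.0, 20.0, 30.0, 40.0})));
  }
}
```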

Review Comment:
   > I feel it would be better this way.
   > 
   > ```
   > LOG.error("Cannot get the live nodes.", e).
   > ```
   
   Do we want to have the full stack trace? I think it is pretty clear what the error is here without it.
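
The review comment is about whether the caught exception should be passed as the last argument (which attaches the full stack trace) or only its message logged. A minimal sketch of the difference using java.util.logging as a stand-in (the real code uses SLF4J, where the same distinction applies to LOG.error overloads):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogStyleDemo {
  // Logs the same failure in both styles and returns the captured records.
  static List<LogRecord> logBothStyles() {
    Logger log = Logger.getLogger("LogStyleDemo");
    log.setUseParentHandlers(false);
    final List<LogRecord> records = new ArrayList<>();
    log.addHandler(new Handler() {
      @Override public void publish(LogRecord r) { records.add(r); }
      @Override public void flush() {}
      @Override public void close() {}
    });

    IOException e = new IOException("connection refused");
    // Message-only style, as kept in the patch: no throwable is attached,
    // so only a single line is emitted, with no stack trace.
    log.log(Level.SEVERE, "Cannot get the live nodes: " + e.getMessage());
    // Reviewer's proposed style: the throwable is attached, so the
    // formatter prints the message plus the full stack trace.
    log.log(Level.SEVERE, "Cannot get the live nodes.", e);
    return records;
  }

  public static void main(String[] args) {
    List<LogRecord> records = logBothStyles();
    System.out.println("message-only has throwable: "
        + (records.get(0).getThrown() != null));
    System.out.println("exception-arg has throwable: "
        + (records.get(1).getThrown() != null));
  }
}
```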





Issue Time Tracking
-------------------

    Worklog Id:     (was: 796166)
    Time Spent: 1h 50m  (was: 1h 40m)

> RBF supports disable getNodeUsage() in RBFMetrics
> -------------------------------------------------
>
>                 Key: HDFS-16678
>                 URL: https://issues.apache.org/jira/browse/HDFS-16678
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In our prod environment, we try to collect RBF metrics every 15s through 
> jmx_exporter, and we found that the collection task often failed. 
> After tracing, we found that the collection task is blocked at getNodeUsage() 
> in RBFMetrics, because it collects every datanode's usage from the 
> downstream nameservices. This is a very expensive and almost useless 
> operation, because in most scenarios each NameService contains almost the 
> same DNs, so we can get the datanode usage from any one nameservice rather than from RBF.
> So I feel that RBF should support disabling getNodeUsage() in RBFMetrics.
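
The patch guards the datanode report behind an enableGetDNUsage flag, so the toggle would presumably be set in the Router's configuration. A hedged sketch of what that might look like in hdfs-rbf-site.xml — the property key below is an assumption, since the excerpt only shows the enableGetDNUsage field, not the key it is read from:

```xml
<!-- Hypothetical key name; check RBFConfigKeys in the merged patch
     for the actual property. Setting it to false makes getNodeUsage()
     skip the expensive cross-nameservice datanode report. -->
<property>
  <name>dfs.federation.router.enable.get.dn.usage</name>
  <value>false</value>
</property>
```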



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
