Tejaskriya commented on code in PR #6369:
URL: https://github.com/apache/ozone/pull/6369#discussion_r1524716633


##########
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionStatusSubCommand.java:
##########
@@ -100,6 +110,22 @@ public void execute(ScmClient scmClient) throws IOException {
      numDecomNodes = (totalDecom == null ? -1 : Integer.parseInt(totalDecom.toString()));
    }
 
+    if (json) {
+      List<Map<String, Object>> decommissioningNodesDetails = new ArrayList<>();
+
+      for (HddsProtos.Node node : decommissioningNodes) {
+        DatanodeDetails datanode = DatanodeDetails.getFromProtoBuf(
+            node.getNodeID());
+        Map<String, Object> datanodeMap = new LinkedHashMap<>();
+        datanodeMap.put("datanodeDetails", getDatanodeDetails(datanode));

Review Comment:
   We don't need the `getDatanodeDetails()` helper just to build a map; `datanode` can be put in the map directly. Although this leads to more verbose output, the JSON option was meant for verbose output anyway, so it is better.
   ```suggestion
           datanodeMap.put("datanodeDetails", datanode);
   ```
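   For context, a JSON mapper effectively turns a POJO into a map keyed by its getters, which is why putting `datanode` in directly gives the more verbose output. A minimal stdlib-only sketch of that idea (the `Node` class and its fields below are illustrative stand-ins, not the real `DatanodeDetails`):

   ```java
   import java.lang.reflect.Method;
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class PojoToMapSketch {
     // Illustrative stand-in for DatanodeDetails; the extra getter shows
     // a field a serializer would pick up that the four-field helper drops.
     static class Node {
       public String getUuid() { return "abc-123"; }
       public String getHostName() { return "dn1"; }
       public String getIpAddress() { return "10.0.0.1"; }
       public String getNetworkLocation() { return "/default-rack"; }
       public int getCurrentVersion() { return 2; }
     }

     // Roughly what a JSON mapper does: every zero-arg getter becomes a key.
     static Map<String, Object> asMap(Object o) throws Exception {
       Map<String, Object> m = new LinkedHashMap<>();
       for (Method meth : o.getClass().getDeclaredMethods()) {
         if (meth.getName().startsWith("get") && meth.getParameterCount() == 0) {
           m.put(meth.getName().substring(3), meth.invoke(o));
         }
       }
       return m;
     }

     public static void main(String[] args) throws Exception {
       // 5 keys here versus the 4 that getDatanodeDetails() keeps.
       System.out.println(asMap(new Node()).size());
     }
   }
   ```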



##########
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionStatusSubCommand.java:
##########
@@ -100,6 +110,22 @@ public void execute(ScmClient scmClient) throws IOException {
      numDecomNodes = (totalDecom == null ? -1 : Integer.parseInt(totalDecom.toString()));
    }
 
+    if (json) {
+      List<Map<String, Object>> decommissioningNodesDetails = new ArrayList<>();
+
+      for (HddsProtos.Node node : decommissioningNodes) {
+        DatanodeDetails datanode = DatanodeDetails.getFromProtoBuf(
+            node.getNodeID());
+        Map<String, Object> datanodeMap = new LinkedHashMap<>();
+        datanodeMap.put("datanodeDetails", getDatanodeDetails(datanode));
+        datanodeMap.put("metrics", getCounts(datanode, jsonNode, numDecomNodes));
+        datanodeMap.put("containers", getContainers(scmClient, datanode));

Review Comment:
   Similarly, the `getContainers()` method is not required either, since the `scmClient.getContainersOnDecomNode(datanode)` API already returns a map.
   ```suggestion
           datanodeMap.put("containers", scmClient.getContainersOnDecomNode(datanode));
   ```
   



##########
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionStatusSubCommand.java:
##########
@@ -134,9 +179,62 @@ private void printCounts(DatanodeDetails datanode, JsonNode counts, int numDecom
          return;
        }
      }
-      System.err.println("Error getting pipeline and container counts for " + datanode.getHostName());
-    } catch (NullPointerException ex) {
-      System.err.println("Error getting pipeline and container counts for " + datanode.getHostName());
+      System.err.println(errMsg);
+    } catch (IOException e) {
+      System.err.println(errMsg);
     }
   }
+
+  private Map<String, Object> getDatanodeDetails(DatanodeDetails datanode) {
+    Map<String, Object> detailsMap = new LinkedHashMap<>();
+    detailsMap.put("uuid", datanode.getUuid().toString());
+    detailsMap.put("networkLocation", datanode.getNetworkLocation());
+    detailsMap.put("ipAddress", datanode.getIpAddress());
+    detailsMap.put("hostname", datanode.getHostName());
+    return detailsMap;
+  }
+
+  private Map<String, Object> getCounts(DatanodeDetails datanode, JsonNode counts, int numDecomNodes) {
+    Map<String, Object> countsMap = new LinkedHashMap<>();
+    String errMsg = getErrorMessage() + datanode.getHostName();
+    try {
+      for (int i = 1; i <= numDecomNodes; i++) {
+        if (datanode.getHostName().equals(counts.get("tag.datanode." + i).asText())) {
+          JsonNode pipelinesDN = counts.get("PipelinesWaitingToCloseDN." + i);
+          JsonNode underReplicatedDN = counts.get("UnderReplicatedDN." + i);
+          JsonNode unclosedDN = counts.get("UnclosedContainersDN." + i);
+          JsonNode startTimeDN = counts.get("StartTimeDN." + i);
+          if (pipelinesDN == null || underReplicatedDN == null || unclosedDN == null || startTimeDN == null) {
+            throw new IOException(errMsg);
+          }
+
+          int pipelines = Integer.parseInt(pipelinesDN.toString());
+          double underReplicated = Double.parseDouble(underReplicatedDN.toString());
+          double unclosed = Double.parseDouble(unclosedDN.toString());
+          long startTime = Long.parseLong(startTimeDN.toString());
+          Date date = new Date(startTime);
+          DateFormat formatter = new SimpleDateFormat("dd/MM/yyyy hh:mm:ss z");
+          countsMap.put("decommissionStartTime", formatter.format(date));
+          countsMap.put("numOfUnclosedPipelines", pipelines);
+          countsMap.put("numOfUnderReplicatedContainers", underReplicated);
+          countsMap.put("numOfUnclosedContainers", unclosed);
+          return countsMap;
+        }
+      }
+      System.err.println(errMsg);
+    } catch (IOException e) {
+      System.err.println(errMsg);
+    }
+    return countsMap;
+  }
+
+  private Map<String, Object> getContainers(ScmClient scmClient, DatanodeDetails datanode) throws IOException {

Review Comment:
   Following previous comments, this method can also be removed.



##########
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionStatusSubCommand.java:
##########
@@ -134,9 +179,62 @@ private void printCounts(DatanodeDetails datanode, JsonNode counts, int numDecom
          return;
        }
      }
-      System.err.println("Error getting pipeline and container counts for " + datanode.getHostName());
-    } catch (NullPointerException ex) {
-      System.err.println("Error getting pipeline and container counts for " + datanode.getHostName());
+      System.err.println(errMsg);
+    } catch (IOException e) {
+      System.err.println(errMsg);
     }
   }
+
+  private Map<String, Object> getDatanodeDetails(DatanodeDetails datanode) {

Review Comment:
   Following previous comments, this method can be removed.



##########
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionStatusSubCommand.java:
##########
@@ -134,9 +179,62 @@ private void printCounts(DatanodeDetails datanode, JsonNode counts, int numDecom
          return;
        }
      }
-      System.err.println("Error getting pipeline and container counts for " + datanode.getHostName());
-    } catch (NullPointerException ex) {
-      System.err.println("Error getting pipeline and container counts for " + datanode.getHostName());
+      System.err.println(errMsg);
+    } catch (IOException e) {
+      System.err.println(errMsg);
     }
   }
+
+  private Map<String, Object> getDatanodeDetails(DatanodeDetails datanode) {
+    Map<String, Object> detailsMap = new LinkedHashMap<>();
+    detailsMap.put("uuid", datanode.getUuid().toString());
+    detailsMap.put("networkLocation", datanode.getNetworkLocation());
+    detailsMap.put("ipAddress", datanode.getIpAddress());
+    detailsMap.put("hostname", datanode.getHostName());
+    return detailsMap;
+  }
+
+  private Map<String, Object> getCounts(DatanodeDetails datanode, JsonNode counts, int numDecomNodes) {

Review Comment:
   There is quite a bit of code duplication between `printCounts` and `getCounts`. To avoid it, we can call `getCounts` from `printCounts` and then display the result appropriately. Something like this:
   ```
     private void printCounts(DatanodeDetails datanode, JsonNode counts, int numDecomNodes) {
       Map<String, Object> countsMap = getCounts(datanode, counts, numDecomNodes);
       System.out.println("Decommission Started At : " + countsMap.get("decommissionStartTime"));
       System.out.println("No. of Unclosed Pipelines: " + countsMap.get("numOfUnclosedPipelines"));
       System.out.println("No. of UnderReplicated Containers: " + countsMap.get("numOfUnderReplicatedContainers"));
       System.out.println("No. of Unclosed Containers: " + countsMap.get("numOfUnclosedContainers"));
     }
   ```
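   One caveat (an assumption about this refactor, not something already in the PR): on the error path `getCounts` returns an empty map, so the plain version above would print `null` for every field. A small stdlib-only illustration of a guard that avoids that:

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class PrintCountsGuardSketch {
     // Render the metrics only when getCounts actually populated the map;
     // on the error path getCounts has already written to stderr.
     static String render(Map<String, Object> countsMap) {
       if (countsMap.isEmpty()) {
         return "";
       }
       return "Decommission Started At : " + countsMap.get("decommissionStartTime")
           + "\nNo. of Unclosed Pipelines: " + countsMap.get("numOfUnclosedPipelines");
     }

     public static void main(String[] args) {
       Map<String, Object> counts = new LinkedHashMap<>();
       // Empty map: nothing is rendered instead of a block of "null"s.
       System.out.println(render(counts).isEmpty());

       counts.put("decommissionStartTime", "13/03/2024 10:00:00 UTC");
       counts.put("numOfUnclosedPipelines", 2);
       System.out.println(render(counts));
     }
   }
   ```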



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

