devmadhuu commented on code in PR #6500:
URL: https://github.com/apache/ozone/pull/6500#discussion_r1587207892


##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerMetadataInspector.java:
##########
@@ -181,55 +182,61 @@ public String process(ContainerData containerData, DatanodeStore store,
       return null;
     }
 
-    JsonObject containerJson = inspectContainer(kvData, store);
-    boolean correct = checkAndRepair(containerJson, kvData, store);
-
-    Gson gson = new GsonBuilder()
-        .setPrettyPrinting()
-        .serializeNulls()
-        .create();
-    String jsonReport = gson.toJson(containerJson);
-    if (log != null) {
-      if (correct) {
-        log.trace(jsonReport);
-      } else {
-        log.error(jsonReport);
+    ObjectNode containerJson = inspectContainer(kvData, store);
+    boolean correct = checkAndRepair(containerJson, kvData,
+        store);
+
+    ObjectMapper objectMapper = new ObjectMapper();
+    objectMapper.enable(
+        SerializationFeature.INDENT_OUTPUT);
+    String jsonReport = null;
+    try {
+      jsonReport = objectMapper.writeValueAsString(containerJson);

Review Comment:
   Can we use `org.apache.hadoop.hdds.server.JsonUtils#toJsonString` here?
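
   Even if `JsonUtils#toJsonString` turns out not to fit (e.g. its pretty-printing differs), the per-call mapper could at least be hoisted into a shared constant. A minimal sketch of that idea — the class and helper names below are illustrative, not from the PR, and the guess about what `JsonUtils` wraps is an assumption:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class JsonReportSketch {
  // One shared, thread-safe mapper instead of building a new one per
  // process() call; presumably JsonUtils#toJsonString wraps something
  // similar internally.
  private static final ObjectMapper MAPPER = new ObjectMapper()
      .enable(SerializationFeature.INDENT_OUTPUT);

  static String toJsonString(ObjectNode node) throws Exception {
    return MAPPER.writeValueAsString(node);
  }

  public static void main(String[] args) throws Exception {
    ObjectNode report = MAPPER.createObjectNode();
    report.put("containerID", 1);
    // Prints the pretty-printed (indented) JSON report.
    System.out.println(toJsonString(report));
  }
}
```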



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerMetadataInspector.java:
##########
@@ -373,33 +375,32 @@ private boolean checkAndRepair(JsonObject parent,
         return repaired;
       };
 
-      JsonObject blockCountError = buildErrorAndRepair("dBMetadata." +
+      ObjectNode blockCountError = buildErrorAndRepair("dBMetadata." +
               OzoneConsts.BLOCK_COUNT, blockCountAggregate, blockCountDB,
           keyRepairAction);
       errors.add(blockCountError);
     }
 
     // Check and repair used bytes.
-    JsonElement usedBytesDB = parent.getAsJsonObject("dBMetadata")
-        .get(OzoneConsts.CONTAINER_BYTES_USED);
-    JsonElement usedBytesAggregate = parent.getAsJsonObject("aggregates")
-        .get("usedBytes");
+    JsonNode usedBytesDB = parent.path("dBMetadata")
+        .path(OzoneConsts.CONTAINER_BYTES_USED);
+    JsonNode usedBytesAggregate = parent.path("aggregates").path("usedBytes");
 
     // If used bytes is absent from the DB, it is only an error if there is
     // a non-zero aggregate of used bytes among the block keys.
     long usedBytesDBLong = 0;
-    if (!usedBytesDB.isJsonNull()) {
-      usedBytesDBLong = usedBytesDB.getAsLong();
+    if (!usedBytesDB.isMissingNode()) {

Review Comment:
   Do `isNull` and `isMissingNode` behave the same? For `blockCountDB` you used `isNull`, but here for `usedBytesDB` you used `isMissingNode`, while the old code used `isJsonNull` in both places. Please make the two checks consistent (or explain why they need to differ).
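
   For reference, the two Jackson checks are not equivalent: `isNull()` is true for a key that is present with an explicit JSON `null` (the closest analogue of Gson's `isJsonNull()`), while `isMissingNode()` is true for a key that is absent entirely (where Gson's `get()` would have returned a Java `null`). A small self-contained sketch (class name is mine):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NodeCheckSketch {
  public static void main(String[] args) throws Exception {
    JsonNode parent = new ObjectMapper().readTree("{\"a\": null}");

    // Key present with an explicit JSON null: NullNode.
    JsonNode present = parent.path("a");
    System.out.println(present.isNull());        // true
    System.out.println(present.isMissingNode()); // false

    // Key absent entirely: path() returns MissingNode, not null.
    JsonNode absent = parent.path("b");
    System.out.println(absent.isNull());         // false
    System.out.println(absent.isMissingNode());  // true
  }
}
```

   Note that both `NullNode` and `MissingNode` default `asLong()` to `0`, which is why the surrounding code happens to work either way — but the guard conditions still mean different things.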



##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/shell/TestOzoneShellHA.java:
##########
@@ -412,11 +412,13 @@ private int getNumOfBuckets(String bucketPrefix)
    * Parse output into ArrayList with Gson.
    * @return ArrayList
    */
-  private ArrayList<LinkedTreeMap<String, String>> parseOutputIntoArrayList()
-      throws UnsupportedEncodingException {
-    return new Gson().fromJson(out.toString(DEFAULT_ENCODING), ArrayList.class);
+  private ArrayList<LinkedHashMap<String, Object>> parseOutputIntoArrayList()

Review Comment:
   Is it necessary to use the specific `ArrayList` and `LinkedHashMap` types here? Can't we just declare `List` and `HashMap` (or the `Map` interface)?
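
   A sketch of the interface-typed variant (names are illustrative; the helper here is a stand-in for the actual test method, which would populate the list via Jackson's `TypeReference` rather than by hand):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class InterfaceTypingSketch {
  // Callers only need the interfaces; the concrete classes remain an
  // implementation detail of how the JSON output is deserialized.
  static List<Map<String, Object>> parseOutput() {
    List<Map<String, Object>> result = new ArrayList<>();
    Map<String, Object> row = new LinkedHashMap<>(); // keeps JSON key order
    row.put("name", "vol1");
    result.add(row);
    return result;
  }

  public static void main(String[] args) {
    System.out.println(parseOutput());
  }
}
```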



##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/shell/TestOzoneShellHA.java:
##########
@@ -1696,7 +1699,7 @@ public void testListVolumeBucketKeyShouldPrintValidJsonArray()
     execute(ozoneShell, new String[] {"volume", "list"});
 
     // Expect valid JSON array
-    final ArrayList<LinkedTreeMap<String, String>> volumeListOut =
+    final ArrayList<LinkedHashMap<String, Object>> volumeListOut =

Review Comment:
   Same as the above comment. Please check all the other places as well.



##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconWithOzoneManager.java:
##########
@@ -383,11 +383,11 @@ private long getReconTaskAttributeFromJson(String taskStatusResponse,
                                              String taskName,
                                              String entityAttribute)
       throws IOException {
-    List<HashMap<String, Object>> taskStatusList =
+    List<LinkedHashMap<String, Object>> taskStatusList =

Review Comment:
   Is it necessary to use the specific `LinkedHashMap` type here? Can't we just use `HashMap`?



##########
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerMetadataInspector.java:
##########
@@ -521,8 +512,11 @@ private JsonObject runInspectorAndGetReport(
     capturer.stopCapturing();
     String output = capturer.getOutput();
     capturer.clearOutput();
-
-    return new Gson().fromJson(output, JsonObject.class);
+    // Check if the output is effectively empty
+    if (output.trim().isEmpty()) {
+      return null;

Review Comment:
   Since this method can now return null, please make sure callers handle it, so there is no NPE or other impact.
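
   A minimal illustration of the caller-side guard (the helper below is a hypothetical stand-in for `runInspectorAndGetReport`, not the actual test code):

```java
public class NullReportSketch {
  // Stand-in: returns null when the inspector produced no output,
  // mirroring the new behavior in the PR.
  static String runAndGetReport(String capturedOutput) {
    return capturedOutput.trim().isEmpty() ? null : capturedOutput;
  }

  public static void main(String[] args) {
    String report = runAndGetReport("   ");
    // Every caller must guard against null before dereferencing the
    // report (or assert explicitly that no report was expected).
    if (report == null) {
      System.out.println("no report");
    } else {
      System.out.println(report.length());
    }
  }
}
```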



##########
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/admin/scm/ResetDeletedBlockRetryCountSubcommand.java:
##########
@@ -92,9 +90,12 @@ public void execute(ScmClient client) throws IOException {
           System.out.println("The last loaded txID: " +
               txIDs.get(txIDs.size() - 1));
         }
-      } catch (JsonIOException | JsonSyntaxException | IOException ex) {
-        System.out.println("Cannot parse the file " + group.fileName);
+      } catch (JsonProcessingException ex) {
+        System.out.println("Error parsing JSON: " + ex.getMessage());
         throw new IOException(ex);
+      } catch (IOException ex) {

Review Comment:
   Can we combine these two catch blocks, as in the old code? `JsonProcessingException` is a subclass of `IOException`, so even when a JSON parse exception is thrown, you will still get the correct error info in the log, since we wrap it in an `IOException` and rethrow it.
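
   A self-contained illustration of the subclass-catch point, using `FileNotFoundException` as a stand-in for `JsonProcessingException` (both extend `IOException`; class and method names are mine):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class CatchCombineSketch {
  static void parse(boolean jsonError) throws IOException {
    if (jsonError) {
      // Stand-in for a JSON parse failure; like JsonProcessingException,
      // this is a subclass of IOException.
      throw new FileNotFoundException("bad json");
    }
    throw new IOException("io failure");
  }

  public static void main(String[] args) {
    // A single catch (IOException) handles the subclass too, and the
    // original message is still available for the error report.
    try {
      parse(true);
    } catch (IOException ex) {
      System.out.println("Cannot parse the file: " + ex.getMessage());
    }
  }
}
```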



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

