sumitagrawl commented on code in PR #6360:
URL: https://github.com/apache/ozone/pull/6360#discussion_r1552932874


##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconNodeManager.java:
##########
@@ -321,4 +321,27 @@ public long getNodeDBKeyCount() throws IOException {
       return nodeCount;
     }
   }
+
+  /**
+   * Remove an existing node from the NodeDB. Explicit removal from admin user.
+   * First this API call removes the node info from NodeManager memory and
+   * if successful, then remove the node finally from NODES table as well.
+   *
+   * @param datanodeDetails Datanode details.
+   */
+  @Override
+  public void removeNode(DatanodeDetails datanodeDetails) throws NodeNotFoundException, IOException {
+    try {

Review Comment:
   IMO, we should also do the cleanup below here, as per the dead-node handler at Recon:
   1. Remove the container replicas recorded for the DN (we can verify whether containers in Recon also hold replica info).
   2. Check whether any Recon container meta info needs an update when a DN is removed. Not sure if there is any relation to:
   
   `containerSizeCountTask.process(containerManager.getContainers());`
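A minimal sketch of point 1, using simplified stand-in types rather than the real ReconContainerManager API (the map and method names below are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-in for Recon's container -> replica bookkeeping;
// real code would go through Recon's container manager instead.
public class ReplicaCleanupSketch {

  // containerId -> UUIDs of datanodes currently holding a replica
  private final Map<Long, Set<String>> containerReplicas = new HashMap<>();

  public void addReplica(long containerId, String datanodeUuid) {
    containerReplicas.computeIfAbsent(containerId, k -> new HashSet<>())
        .add(datanodeUuid);
  }

  // On node removal, drop every replica entry recorded against that node,
  // mirroring what the dead-node handler does for a dead DN.
  public void removeReplicasForNode(String datanodeUuid) {
    for (Set<String> replicas : containerReplicas.values()) {
      replicas.remove(datanodeUuid);
    }
  }

  public Set<String> replicasOf(long containerId) {
    return containerReplicas.getOrDefault(containerId, new HashSet<>());
  }
}
```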



##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java:
##########
@@ -171,4 +181,163 @@ private DatanodeStorageReport getStorageReport(DatanodeDetails datanode) {
     long committed = nodeStat.getCommitted().get();
     return new DatanodeStorageReport(capacity, used, remaining, committed);
   }
+
+  /**
+   * Removes datanodes from Recon's memory and nodes table in Recon DB.
+   * @param uuids the list of datanode uuid's
+   *
+   * @return JSON response with failed, not found and successfully removed datanodes list.
+   */
+  @PUT
+  @Path("/remove")
+  @Consumes(MediaType.APPLICATION_JSON)
+  public Response removeDatanodes(List<String> uuids) {
+    List<DatanodeMetadata> failedDatanodes = new ArrayList<>();
+    List<DatanodeMetadata> notFoundDatanodes = new ArrayList<>();
+    List<DatanodeMetadata> removedDatanodes = new ArrayList<>();
+
+    Preconditions.checkNotNull(uuids, "Datanode list argument should not be null");
+    Preconditions.checkArgument(!uuids.isEmpty(), "Datanode list argument should not be empty");
+    try {
+      for (String uuid : uuids) {
+        DatanodeDetails nodeByUuid = nodeManager.getNodeByUuid(uuid);
+        try {
+          if (preChecksSuccess(nodeByUuid)) {
+            removedDatanodes.add(DatanodeMetadata.newBuilder()
+                .withHostname(nodeManager.getHostName(nodeByUuid))
+                .withUUid(uuid)
+                .withState(nodeManager.getNodeStatus(nodeByUuid).getHealth())
+                .build());
+            nodeManager.removeNode(nodeByUuid);
+          } else {
+            Response.ResponseBuilder builder = Response.status(Response.Status.BAD_REQUEST);
+            builder.entity("{\n" +
+                "    \"Invalid request: Pre-checks failed for selected datanodes. DataNode should pass following " +
+                "pre-checks.\": [\n" +
+                "        {\n" +
+                "            \"title\": \"Incorrect State\",\n" +
+                "            \"description\": \"DataNode should be in either DECOMMISSIONED operational state or " +
+                "DEAD node state.\"\n" +
+                "        },\n" +
+                "        {\n" +
+                "            \"title\": \"Open Containers\",\n" +
+                "            \"description\": \"Containers are open for few or all selected datanodes.\"\n" +
+                "        },\n" +
+                "        {\n" +
+                "            \"title\": \"Open Pipelines\",\n" +
+                "            \"description\": \"Pipelines are open for few or all selected datanodes.\"\n" +
+                "        }\n" +
+                "    ]\n" +
+                "}");
+            failedDatanodes.add(DatanodeMetadata.newBuilder()
+                .withHostname(nodeManager.getHostName(nodeByUuid))
+                .withUUid(uuid)
+                .withOperationalState(nodeByUuid.getPersistedOpState())
+                .withState(nodeManager.getNodeStatus(nodeByUuid).getHealth())
+                .build());
+            // Build and return the response
+            return builder.build();
+          }
+        } catch (NodeNotFoundException nnfe) {
+          LOG.error("Selected node {} not found : {} ", uuid, nnfe);
+          notFoundDatanodes.add(DatanodeMetadata.newBuilder()
+                  .withHostname("")
+                  .withState(NodeState.DEAD)
+              .withUUid(uuid).build());
+        }
+      }
+    } catch (Exception exp) {
+      LOG.error("Unexpected Error while removing datanodes : {} ", exp);
+      throw new WebApplicationException(exp, Response.Status.INTERNAL_SERVER_ERROR);
+    }
+
+    RemoveDataNodesResponseWrapper removeDataNodesResponseWrapper = new RemoveDataNodesResponseWrapper();
+
+    if (!failedDatanodes.isEmpty()) {
+      DatanodesResponse failedNodesResp =
+          new DatanodesResponse(failedDatanodes.size(), failedDatanodes);
+      failedNodesResp.setMessage("Invalid request: Nodes should be in either DECOMMISSIONED or " +

Review Comment:
   Every failed node can have a different reason for failure, so a single shared message does not fit all of them.
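To illustrate the point, a small sketch of classifying a single node's failed pre-check so each failed node can carry its own reason in the response (the method and parameter names are illustrative, not the PR's API):

```java
import java.util.Optional;

// Illustrative only: checks and wording are assumptions, not the PR's code.
public class PreCheckReason {

  // Return the first failed pre-check for one node, or empty if all pass.
  public static Optional<String> failureReason(boolean stateEligible,
      boolean hasOpenContainers, boolean hasOpenPipelines) {
    if (!stateEligible) {
      return Optional.of("Node is not DECOMMISSIONED, IN_MAINTENANCE or DEAD");
    }
    if (hasOpenContainers) {
      return Optional.of("Node still has open containers");
    }
    if (hasOpenPipelines) {
      return Optional.of("Node still has open pipelines");
    }
    return Optional.empty();
  }
}
```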



##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java:
##########
@@ -171,4 +177,59 @@ private DatanodeStorageReport getStorageReport(DatanodeDetails datanode) {
     long committed = nodeStat.getCommitted().get();
     return new DatanodeStorageReport(capacity, used, remaining, committed);
   }
+
+  @PUT
+  @Path("/remove")
+  @Consumes(MediaType.APPLICATION_JSON)
+  public Response removeDatanodes(List<String> uuids) {
+    List<DatanodeMetadata> notFoundDatanodes = new ArrayList<>();
+    List<DatanodeMetadata> removedDatanodes = new ArrayList<>();
+
+    Preconditions.checkNotNull(uuids, "Datanode list argument should not be null");
+    Preconditions.checkArgument(!uuids.isEmpty(), "Datanode list argument should not be empty");
+    try {
+      for (String uuid : uuids) {
+        DatanodeDetails nodeByUuid = nodeManager.getNodeByUuid(uuid);
+        try {
+          NodeStatus nodeStatus = nodeManager.getNodeStatus(nodeByUuid);
+          boolean isNodeDecommissioned = nodeByUuid.getPersistedOpState() == NodeOperationalState.DECOMMISSIONED;
+          boolean isNodeInMaintenance = nodeByUuid.getPersistedOpState() == NodeOperationalState.IN_MAINTENANCE;
+          if (isNodeDecommissioned || isNodeInMaintenance || nodeStatus.isDead()) {
+            removedDatanodes.add(DatanodeMetadata.newBuilder()
+                .withHostname(nodeManager.getHostName(nodeByUuid))
+                .withUUid(uuid)
+                .withState(nodeManager.getNodeStatus(nodeByUuid).getHealth())
+                .build());
+            nodeManager.removeNode(nodeByUuid);
+          } else {
+            Response.ResponseBuilder builder = Response.status(Response.Status.BAD_REQUEST);
+            builder.entity("Invalid request: Node: " + uuid + " should be in either DECOMMISSIONED or " +
+                "IN_MAINTENANCE mode or DEAD.");
+            // Build and return the response
+            return builder.build();

Review Comment:
   Still not fixed: the error is returned from this point only, so the remaining nodes in the list are never processed. Need to remove the builder and the early return.
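A minimal sketch of the suggested shape, assuming hypothetical helper names: process every UUID, collect per-node outcomes, and build one response from both lists at the end instead of returning BAD_REQUEST for the first ineligible node:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch, not the PR's code: no early return on failure.
public class RemoveAllNodesSketch {

  public static Map<String, List<String>> partition(List<String> uuids,
      Predicate<String> eligible) {
    List<String> removed = new ArrayList<>();
    List<String> failed = new ArrayList<>();
    for (String uuid : uuids) {
      if (eligible.test(uuid)) {
        removed.add(uuid);   // would call nodeManager.removeNode(...) here
      } else {
        failed.add(uuid);    // recorded, but the loop keeps going
      }
    }
    Map<String, List<String>> result = new LinkedHashMap<>();
    result.put("removedDatanodes", removed);
    result.put("failedDatanodes", failed);
    return result;           // single response built from both lists
  }
}
```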



##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java:
##########
@@ -171,4 +181,163 @@ private DatanodeStorageReport getStorageReport(DatanodeDetails datanode) {
     long committed = nodeStat.getCommitted().get();
     return new DatanodeStorageReport(capacity, used, remaining, committed);
   }
+
+  /**
+   * Removes datanodes from Recon's memory and nodes table in Recon DB.
+   * @param uuids the list of datanode uuid's
+   *
+   * @return JSON response with failed, not found and successfully removed datanodes list.
+   */
+  @PUT
+  @Path("/remove")
+  @Consumes(MediaType.APPLICATION_JSON)
+  public Response removeDatanodes(List<String> uuids) {
+    List<DatanodeMetadata> failedDatanodes = new ArrayList<>();
+    List<DatanodeMetadata> notFoundDatanodes = new ArrayList<>();
+    List<DatanodeMetadata> removedDatanodes = new ArrayList<>();
+
+    Preconditions.checkNotNull(uuids, "Datanode list argument should not be null");
+    Preconditions.checkArgument(!uuids.isEmpty(), "Datanode list argument should not be empty");
+    try {
+      for (String uuid : uuids) {
+        DatanodeDetails nodeByUuid = nodeManager.getNodeByUuid(uuid);
+        try {
+          if (preChecksSuccess(nodeByUuid)) {
+            removedDatanodes.add(DatanodeMetadata.newBuilder()
+                .withHostname(nodeManager.getHostName(nodeByUuid))
+                .withUUid(uuid)
+                .withState(nodeManager.getNodeStatus(nodeByUuid).getHealth())
+                .build());
+            nodeManager.removeNode(nodeByUuid);
+          } else {
+            Response.ResponseBuilder builder = Response.status(Response.Status.BAD_REQUEST);
+            builder.entity("{\n" +
+                "    \"Invalid request: Pre-checks failed for selected datanodes. DataNode should pass following " +
+                "pre-checks.\": [\n" +
+                "        {\n" +
+                "            \"title\": \"Incorrect State\",\n" +
+                "            \"description\": \"DataNode should be in either DECOMMISSIONED operational state or " +
+                "DEAD node state.\"\n" +
+                "        },\n" +
+                "        {\n" +
+                "            \"title\": \"Open Containers\",\n" +
+                "            \"description\": \"Containers are open for few or all selected datanodes.\"\n" +
+                "        },\n" +
+                "        {\n" +
+                "            \"title\": \"Open Pipelines\",\n" +
+                "            \"description\": \"Pipelines are open for few or all selected datanodes.\"\n" +
+                "        }\n" +
+                "    ]\n" +
+                "}");
+            failedDatanodes.add(DatanodeMetadata.newBuilder()
+                .withHostname(nodeManager.getHostName(nodeByUuid))
+                .withUUid(uuid)
+                .withOperationalState(nodeByUuid.getPersistedOpState())
+                .withState(nodeManager.getNodeStatus(nodeByUuid).getHealth())
+                .build());
+            // Build and return the response
+            return builder.build();
+          }
+        } catch (NodeNotFoundException nnfe) {
+          LOG.error("Selected node {} not found : {} ", uuid, nnfe);
+          notFoundDatanodes.add(DatanodeMetadata.newBuilder()
+                  .withHostname("")
+                  .withState(NodeState.DEAD)
+              .withUUid(uuid).build());
+        }
+      }
+    } catch (Exception exp) {
+      LOG.error("Unexpected Error while removing datanodes : {} ", exp);
+      throw new WebApplicationException(exp, Response.Status.INTERNAL_SERVER_ERROR);
+    }
+
+    RemoveDataNodesResponseWrapper removeDataNodesResponseWrapper = new RemoveDataNodesResponseWrapper();
+
+    if (!failedDatanodes.isEmpty()) {
+      DatanodesResponse failedNodesResp =
+          new DatanodesResponse(failedDatanodes.size(), failedDatanodes);
+      failedNodesResp.setMessage("Invalid request: Nodes should be in either DECOMMISSIONED or " +

Review Comment:
   The description is not correct: the failure can also be due to open pipelines or open containers, not only the node state.
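One way to keep the summary accurate is to build it from the causes actually observed rather than a fixed state-only sentence. A hedged sketch (names are illustrative assumptions):

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.StringJoiner;

// Illustrative sketch: compose the failure summary from observed causes only.
public class FailureMessageSketch {

  public static String summarize(Set<String> observedCauses) {
    StringJoiner joiner = new StringJoiner("; ",
        "Invalid request: pre-checks failed: ", "");
    observedCauses.forEach(joiner::add);
    return joiner.toString();
  }
}
```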



##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java:
##########
@@ -171,4 +181,163 @@ private DatanodeStorageReport getStorageReport(DatanodeDetails datanode) {
     long committed = nodeStat.getCommitted().get();
     return new DatanodeStorageReport(capacity, used, remaining, committed);
   }
+
+  /**
+   * Removes datanodes from Recon's memory and nodes table in Recon DB.
+   * @param uuids the list of datanode uuid's
+   *
+   * @return JSON response with failed, not found and successfully removed datanodes list.
+   */
+  @PUT
+  @Path("/remove")
+  @Consumes(MediaType.APPLICATION_JSON)
+  public Response removeDatanodes(List<String> uuids) {
+    List<DatanodeMetadata> failedDatanodes = new ArrayList<>();
+    List<DatanodeMetadata> notFoundDatanodes = new ArrayList<>();
+    List<DatanodeMetadata> removedDatanodes = new ArrayList<>();
+
+    Preconditions.checkNotNull(uuids, "Datanode list argument should not be null");
+    Preconditions.checkArgument(!uuids.isEmpty(), "Datanode list argument should not be empty");
+    try {
+      for (String uuid : uuids) {
+        DatanodeDetails nodeByUuid = nodeManager.getNodeByUuid(uuid);
+        try {
+          if (preChecksSuccess(nodeByUuid)) {
+            removedDatanodes.add(DatanodeMetadata.newBuilder()
+                .withHostname(nodeManager.getHostName(nodeByUuid))
+                .withUUid(uuid)
+                .withState(nodeManager.getNodeStatus(nodeByUuid).getHealth())
+                .build());
+            nodeManager.removeNode(nodeByUuid);
+          } else {
+            Response.ResponseBuilder builder = Response.status(Response.Status.BAD_REQUEST);
+            builder.entity("{\n" +
+                "    \"Invalid request: Pre-checks failed for selected datanodes. DataNode should pass following " +
+                "pre-checks.\": [\n" +
+                "        {\n" +
+                "            \"title\": \"Incorrect State\",\n" +
+                "            \"description\": \"DataNode should be in either DECOMMISSIONED operational state or " +
+                "DEAD node state.\"\n" +
+                "        },\n" +
+                "        {\n" +
+                "            \"title\": \"Open Containers\",\n" +
+                "            \"description\": \"Containers are open for few or all selected datanodes.\"\n" +
+                "        },\n" +
+                "        {\n" +
+                "            \"title\": \"Open Pipelines\",\n" +
+                "            \"description\": \"Pipelines are open for few or all selected datanodes.\"\n" +
+                "        }\n" +
+                "    ]\n" +
+                "}");
+            failedDatanodes.add(DatanodeMetadata.newBuilder()
+                .withHostname(nodeManager.getHostName(nodeByUuid))
+                .withUUid(uuid)
+                .withOperationalState(nodeByUuid.getPersistedOpState())
+                .withState(nodeManager.getNodeStatus(nodeByUuid).getHealth())
+                .build());
+            // Build and return the response
+            return builder.build();
+          }
+        } catch (NodeNotFoundException nnfe) {
+          LOG.error("Selected node {} not found : {} ", uuid, nnfe);
+          notFoundDatanodes.add(DatanodeMetadata.newBuilder()
+                  .withHostname("")
+                  .withState(NodeState.DEAD)
+              .withUUid(uuid).build());
+        }
+      }
+    } catch (Exception exp) {
+      LOG.error("Unexpected Error while removing datanodes : {} ", exp);
+      throw new WebApplicationException(exp, Response.Status.INTERNAL_SERVER_ERROR);
+    }
+
+    RemoveDataNodesResponseWrapper removeDataNodesResponseWrapper = new RemoveDataNodesResponseWrapper();
+
+    if (!failedDatanodes.isEmpty()) {
+      DatanodesResponse failedNodesResp =
+          new DatanodesResponse(failedDatanodes.size(), failedDatanodes);
+      failedNodesResp.setMessage("Invalid request: Nodes should be in either DECOMMISSIONED or " +
+          "IN_MAINTENANCE mode or in DEAD State.");
+      removeDataNodesResponseWrapper.getDatanodesResponseMap().put("failedDatanodes", failedNodesResp);
+    }
+
+    if (!notFoundDatanodes.isEmpty()) {
+      DatanodesResponse notFoundNodesResp =
+          new DatanodesResponse(notFoundDatanodes.size(), notFoundDatanodes);
+      notFoundNodesResp.setMessage("Invalid request: Selected nodes not found. Kindly send correct node " +
+          "details to remove it !!!");

Review Comment:
   We can remove this message, as it provides redundant information: a non-empty notFoundNodesResp object already tells the same thing.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

