errose28 commented on code in PR #4972:
URL: https://github.com/apache/ozone/pull/4972#discussion_r1260546981


##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java:
##########
@@ -160,6 +160,11 @@ void updateDataScanTimestamp(Instant timestamp)
    */
   void markContainerUnhealthy() throws StorageContainerException;
 
+  /**
+   * Marks the container replica as deleted.
+   */
+  void markContainerForDelete() throws StorageContainerException;

Review Comment:
   nit. The exception can be removed from the method signature here and in the 
implementing method.
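
   For illustration, the change might look like the sketch below. The stand-in types (`DeletableContainer`, `StubContainer`) are hypothetical, not the real Ozone classes, and it assumes the mark is a non-throwing state change:

   ```java
   // Hedged sketch of the suggestion: drop the checked exception from both
   // the interface and the implementation. Stand-in types are hypothetical.
   interface DeletableContainer {
     /**
      * Marks the container replica as deleted.
      */
     void markContainerForDelete();  // no "throws StorageContainerException"
   }

   class StubContainer implements DeletableContainer {
     private boolean markedForDelete;

     @Override
     public void markContainerForDelete() {
       markedForDelete = true;
     }

     boolean isMarkedForDelete() {
       return markedForDelete;
     }
   }
   ```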



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java:
##########
@@ -232,4 +240,20 @@ public void verifyAndFixupContainerData(ContainerData 
containerData)
           ContainerProtos.Result.UNKNOWN_CONTAINER_TYPE);
     }
   }
+
+  private void cleanupDeletedContainer(

Review Comment:
   Can we use a method like this in both the startup cleanup code and the main 
container delete path? That will keep the same sequence of steps in one place.
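
   Roughly the shape being suggested, using hypothetical stand-ins rather than the real Ozone API (the step strings are illustrative only):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical stand-in showing the refactor: one method owns the ordered
   // delete steps, and both the startup scan and the runtime delete path
   // call it, so the sequence lives in exactly one place.
   class ContainerCleanup {
     final List<String> steps = new ArrayList<>();

     // Single place that encodes the delete sequence.
     void cleanupDeletedContainer(long containerId) {
       steps.add("removeDbEntries:" + containerId);
       steps.add("deleteContainerDir:" + containerId);
     }

     // Startup path (e.g. ContainerReader) reuses the same sequence.
     void cleanupOnStartup(long containerId) {
       cleanupDeletedContainer(containerId);
     }

     // Runtime delete path (e.g. KeyValueHandler) reuses the same sequence.
     void cleanupOnDelete(long containerId) {
       cleanupDeletedContainer(containerId);
     }
   }
   ```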



##########
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java:
##########
@@ -182,70 +179,30 @@ public void testDeletedContainersClearedOnStartup() 
throws Exception {
     conf.setFromObject(new ReplicationConfig().setPort(0));
     try (EndpointStateMachine rpcEndPoint = createEndpoint(conf,
         serverAddress, 1000)) {
-      DatanodeDetails datanodeDetails = randomDatanodeDetails();
-      OzoneContainer ozoneContainer = new OzoneContainer(
-          datanodeDetails, conf, getContext(datanodeDetails));
+      OzoneContainer ozoneContainer = createVolume(conf);
+      HddsVolume hddsVolume = (HddsVolume) ozoneContainer.getVolumeSet()
+          .getVolumesList().get(0);
+      KeyValueContainer kvContainer = addContainer(conf, hddsVolume);
+      // For testing, we are moving the container under the tmp directory,
+      // in order to delete during datanode startup or shutdown
+      KeyValueContainerUtil.moveToDeletedContainerDir(
+          kvContainer.getContainerData(), hddsVolume);

Review Comment:
   nit. Might be good to check here that the deleted containers directory has 
content, so we know this call succeeded, and then check after startup that the 
directory was cleared.
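
   A sketch of the check using plain java.nio on a scratch directory; it does not model the real HddsVolume layout. The idea: assert the deleted containers directory is non-empty right after the move, and empty again after startup:

   ```java
   import java.io.IOException;
   import java.io.UncheckedIOException;
   import java.nio.file.DirectoryStream;
   import java.nio.file.Files;
   import java.nio.file.Path;

   class DeletedDirCheck {
     // True if the directory contains at least one entry.
     static boolean isNonEmptyDir(Path dir) {
       try (DirectoryStream<Path> s = Files.newDirectoryStream(dir)) {
         return s.iterator().hasNext();
       } catch (IOException e) {
         throw new UncheckedIOException(e);
       }
     }

     // Self-contained demo: empty dir -> false, after adding a file -> true.
     static boolean demo() {
       try {
         Path deletedDir = Files.createTempDirectory("deleted-containers");
         boolean emptyBefore = !isNonEmptyDir(deletedDir);
         Files.createFile(deletedDir.resolve("container-1"));
         boolean nonEmptyAfter = isNonEmptyDir(deletedDir);
         return emptyBefore && nonEmptyAfter;
       } catch (IOException e) {
         throw new UncheckedIOException(e);
       }
     }
   }
   ```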



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java:
##########
@@ -250,52 +241,6 @@ public void cleanDeletedContainerDir() {
       // so it won't be loaded in the container set
       // but there will be orphaned entries in the volume's RocksDB.

Review Comment:
   We can update this comment since we are not deleting RocksDB entries in this 
method anymore.



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java:
##########
@@ -149,6 +149,31 @@ public static void removeContainer(KeyValueContainerData 
containerData,
         .getMetadataPath());
     File chunksPath = new File(containerData.getChunksPath());
 
+    if (!containerData.hasSchema(OzoneConsts.SCHEMA_V3)) {
+      // Close the DB connection and remove the DB handler from cache
+      BlockUtils.removeDB(containerData, conf);
+    }
+
+    // Delete the Container MetaData path.
+    FileUtils.deleteDirectory(containerMetaDataPath);
+
+    //Delete the Container Chunks Path.
+    FileUtils.deleteDirectory(chunksPath);
+
+    //Delete Container directory
+    FileUtils.deleteDirectory(containerMetaDataPath.getParentFile());

Review Comment:
   I think we should update `KeyValueContainerUtil#moveToDeletedContainerDir` 
to not bother updating the paths of the in-memory container data. Then instead 
of deleting the directories individually we can simplify this method to just 
remove the DB cache handles and delete the container from the tmp dir with one 
call to `FileUtils#deleteDirectory`. This will reduce the size of this method 
and it could be absorbed into `KeyValueContainer#delete` (its only caller) 
since we already have a lot of delete related methods floating around different 
classes.
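
   A sketch of that simplification. `deleteRecursively` stands in for a single `FileUtils#deleteDirectory` call, implemented here with java.nio so the snippet is self-contained; the directory layout is illustrative only:

   ```java
   import java.io.IOException;
   import java.io.UncheckedIOException;
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.util.Comparator;
   import java.util.stream.Stream;

   // If moveToDeletedContainerDir no longer rewrites the in-memory paths,
   // cleanup reduces to dropping the DB cache handle plus one recursive
   // delete of the container's tmp directory.
   class TmpContainerCleanup {
     static void deleteRecursively(Path containerTmpDir) {
       if (!Files.exists(containerTmpDir)) {
         return;
       }
       try (Stream<Path> paths = Files.walk(containerTmpDir)) {
         // Delete children before their parent directories.
         for (Path p : (Iterable<Path>)
             paths.sorted(Comparator.reverseOrder())::iterator) {
           Files.delete(p);
         }
       } catch (IOException e) {
         throw new UncheckedIOException(e);
       }
     }

     // Self-contained demo: build a small tree, delete it in one call.
     static boolean demo() {
       try {
         Path dir = Files.createTempDirectory("container-tmp");
         Files.createDirectories(dir.resolve("metadata"));
         Files.createFile(dir.resolve("metadata").resolve("container.yaml"));
         deleteRecursively(dir);
         return !Files.exists(dir);
       } catch (IOException e) {
         throw new UncheckedIOException(e);
       }
     }
   }
   ```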



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -1350,6 +1350,9 @@ private void deleteInternal(Container container, boolean 
force)
 
         // Rename container location
         try {
+          container.markContainerForDelete();

Review Comment:
   We should remove the container from the containerSet after this line, and 
move both those lines just before the try block. Currently the catch block may 
throw an exception if DB update or container move fails, which would leave a 
DELETED container in the container set.
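
   A hypothetical sketch of that ordering (not the real `KeyValueHandler` code): mark the replica and take it out of the set before the fallible work, so a failed rename or DB update cannot leave a DELETED container in the set:

   ```java
   import java.util.HashSet;
   import java.util.Set;

   // Stand-in for the suggested control flow; the failure is simulated.
   class DeleteOrdering {
     final Set<Long> containerSet = new HashSet<>();
     boolean failRename;

     void deleteInternal(long containerId) {
       // Mark DELETED and unpublish from the set first...
       containerSet.remove(containerId);
       try {
         // ...then attempt the rename / DB update, which may fail.
         if (failRename) {
           throw new IllegalStateException("rename failed");
         }
       } catch (IllegalStateException e) {
         // Handle or rethrow; either way the set holds no DELETED replica.
       }
     }
   }
   ```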



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java:
##########
@@ -149,6 +149,31 @@ public static void removeContainer(KeyValueContainerData 
containerData,
         .getMetadataPath());
     File chunksPath = new File(containerData.getChunksPath());
 
+    if (!containerData.hasSchema(OzoneConsts.SCHEMA_V3)) {

Review Comment:
   Not related to this line specifically, but we should update the comment 
block on `KeyValueContainerUtil#moveToDeletedDir` to reflect the current delete 
steps.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

