errose28 commented on a change in pull request #1702:
URL: https://github.com/apache/ozone/pull/1702#discussion_r551391351



##########
File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaTwoDBDefinition.java
##########
@@ -58,10 +61,26 @@
                   ChunkInfoList.class,
                   new ChunkInfoListCodec());
 
+  public static final DBColumnFamilyDefinition<Long, DeletedBlocksTransaction>
+      DELETE_TRANSACTION =
+      new DBColumnFamilyDefinition<>(
+          "txnTable",

Review comment:
       Looks like when I did HDDS-3869 I used snake case for the column family
names (`"block_data"`), even though the tables in other components use camel
case. Can you change the block data table's name to camel case (`"blockData"`)
as well to match? Also, maybe rename this column family to `deleteTxns` for
clarity, since the other column families don't have `"table"` in their names.
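
For illustration, the renamed definition might look something like the sketch
below. This is only a sketch: the key/value codec arguments are assumed from
the surrounding definitions, and `DeletedBlocksTransactionCodec` is a
hypothetical name, not copied from the PR.

```java
// Sketch of the suggested rename (codec choices assumed, not from the PR).
public static final DBColumnFamilyDefinition<Long, DeletedBlocksTransaction>
    DELETE_TRANSACTION =
    new DBColumnFamilyDefinition<>(
        "deleteTxns",                    // was "txnTable"
        Long.class,
        new LongCodec(),
        DeletedBlocksTransaction.class,
        new DeletedBlocksTransactionCodec()); // hypothetical codec name
```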

##########
File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
##########
@@ -312,9 +338,102 @@ public BackgroundTaskResult call() throws Exception {
         }
         crr.addAll(succeedBlocks);
         return crr;
-      } finally {
-        container.writeUnlock();
+      } catch (IOException exception) {
+        LOG.warn(
+            "Deletion operation was not successful for container: " + container
+                .getContainerData().getContainerID());
+      }
+      return crr;
+    }
+
+    public ContainerBackgroundTaskResult deleteViaSchema2(
+        ReferenceCountedDB meta, Container container, File dataDir,
+        long startTime) {
+      ContainerBackgroundTaskResult crr = new ContainerBackgroundTaskResult();
+      try {
+        Table<String, BlockData> blockDataTable =
+            meta.getStore().getBlockDataTable();
+        DatanodeStore ds = meta.getStore();
+        DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+            (DatanodeStoreSchemaTwoImpl) ds;
+        Table<Long, DeletedBlocksTransaction>
+            delTxTable = dnStoreTwoImpl.getTxnTable();
+        List<DeletedBlocksTransaction> delBlocks = new ArrayList<>();
+        int totalBlocks = 0;
+        try (TableIterator<Long,
+            ? extends Table.KeyValue<Long, DeletedBlocksTransaction>> iter =
+            dnStoreTwoImpl.getTxnTable().iterator()) {
+          while (iter.hasNext() && (totalBlocks < blockLimitPerTask)) {

Review comment:
       Ah yes, I see that now. It seems SCM's `DeletedBlockLogImpl` takes a
similar approach: if the first transaction has more blocks than the block
deleting limit, or the last transaction added puts the block count over the
limit, the transaction is still accepted and forwarded to the datanode.
Although this isn't a strict interpretation of the block deleting limit, it is
consistent with the rest of the code, so I think it's OK.
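
To spell out the pattern we are both describing, here is a rough sketch (not
the exact SCM or datanode code; `getLocalIDCount()` is assumed to be the
generated protobuf accessor for the transaction's block list):

```java
// Soft-limit selection: whole transactions are taken until the running block
// count reaches the limit, so the last transaction may push the total over it.
List<DeletedBlocksTransaction> delBlocks = new ArrayList<>();
int totalBlocks = 0;
while (iter.hasNext() && totalBlocks < blockLimitPerTask) {
  DeletedBlocksTransaction delTx = iter.next().getValue();
  totalBlocks += delTx.getLocalIDCount();
  delBlocks.add(delTx);
}
```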

##########
File path: hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
##########
@@ -158,65 +190,167 @@ private void createToDeleteBlocks(ContainerSet containerSet,
     }
     byte[] arr = randomAlphanumeric(1048576).getBytes(UTF_8);
     ChunkBuffer buffer = ChunkBuffer.wrap(ByteBuffer.wrap(arr));
+    int txnID = 0;
     for (int x = 0; x < numOfContainers; x++) {
       long containerID = ContainerTestHelper.getTestContainerID();
       KeyValueContainerData data =
           new KeyValueContainerData(containerID, layout,
               ContainerTestHelper.CONTAINER_MAX_SIZE,
               UUID.randomUUID().toString(), datanodeUuid);
       data.closeContainer();
+      data.setSchemaVersion(schemaVersion);
       KeyValueContainer container = new KeyValueContainer(data, conf);
       container.create(volumeSet,
           new RoundRobinVolumeChoosingPolicy(), scmId);
       containerSet.addContainer(container);
       data = (KeyValueContainerData) containerSet.getContainer(
           containerID).getContainerData();
-      long chunkLength = 100;
-      try(ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
-        for (int j = 0; j < numOfBlocksPerContainer; j++) {
+      if (data.getSchemaVersion().equals(SCHEMA_V1)) {
+        try(ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {

Review comment:
       Sorry for the confusing explanation. I just meant to suggest that lines
210-226 be extracted into a method called something like
`createPendingDeleteBlocksSchema1`, and lines 229-262 into a method called
`createPendingDeleteBlocksSchema2`, as sketched below.
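
The call site could then look roughly like this. Method names and signatures
are only suggestions, and `SCHEMA_V2` is assumed to exist alongside the
`SCHEMA_V1` constant the test already uses; the schema-2 helper would need to
return the updated `txnID`, since it increments across containers:

```java
// Illustrative only: parameter lists would mirror what the inlined code uses.
if (data.getSchemaVersion().equals(SCHEMA_V1)) {
  createPendingDeleteBlocksSchema1(data, conf, buffer);  // current lines 210-226
} else if (data.getSchemaVersion().equals(SCHEMA_V2)) {
  // current lines 229-262
  txnID = createPendingDeleteBlocksSchema2(data, conf, buffer, txnID);
}
```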

##########
File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaOneDBDefinition.java
##########
@@ -88,4 +88,10 @@ protected DatanodeSchemaOneDBDefinition(String dbPath) {
       getDeletedBlocksColumnFamily() {
     return DELETED_BLOCKS;
   }
+
+  @Override
+  public DBColumnFamilyDefinition[] getColumnFamilies() {
+    return new DBColumnFamilyDefinition[] {getBlockDataColumnFamily(),
+        getMetadataColumnFamily(), getDeletedBlocksColumnFamily() };

Review comment:
       Looks like the method is still here; did you mean to delete it?

##########
File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
##########
@@ -289,8 +291,7 @@ public void testBlockDeletionTransactions() throws Exception {
           return false;
         }
       }, 1000, 10000);
-      Assert.assertTrue(helper.getAllBlocks(containerIDs).isEmpty());
-
+      Assert.assertTrue(helper.verifyBlocksWithTxnTable(containerBlocks));

Review comment:
       Do we really need a separate Jira for this? Looks like we could just
check `OzoneConsts#SCHEMA_LATEST` to determine whether to call the new method
or the old one here. Right now the code has a nice invariant: SCHEMA_LATEST can
be set to schema version 1 or 2, and everything builds, all tests pass, and the
system runs. I don't think it's worth violating this (even temporarily), since
preserving it requires only a few extra lines here, but let me know if
supporting this in this test is more complicated and I have missed something.
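
Something like the following is all I had in mind. This is a sketch: it
assumes an `OzoneConsts.SCHEMA_V2` constant to compare against, matching the
`SCHEMA_V1` constant the tests already use.

```java
// Pick the verification path based on the latest schema version.
if (OzoneConsts.SCHEMA_LATEST.equals(OzoneConsts.SCHEMA_V2)) {
  Assert.assertTrue(helper.verifyBlocksWithTxnTable(containerBlocks));
} else {
  Assert.assertTrue(helper.getAllBlocks(containerIDs).isEmpty());
}
```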




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
