swamirishi commented on code in PR #7934:
URL: https://github.com/apache/ozone/pull/7934#discussion_r2010653077


##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java:
##########
@@ -108,6 +121,20 @@ public byte[] unpackContainerData(Container<KeyValueContainerData> container,
     return descriptorFileContent;
   }
 
+  private void persistCustomContainerState(Container<KeyValueContainerData> container, byte[] descriptorContent,

Review Comment:
   Why explicitly Container<KeyValueContainerData>? Can this not just be any Container<? extends ContainerData>?
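
A minimal sketch of the widening the reviewer suggests, using illustrative stub types (not the real Ozone classes): a bounded wildcard lets the helper accept a container of any ContainerData subtype.

```java
// Sketch only: stub types stand in for Ozone's real ContainerData/Container.
public class WildcardSketch {
  interface ContainerData {
    long getContainerID();
  }

  static class KeyValueContainerData implements ContainerData {
    @Override
    public long getContainerID() {
      return 42L;
    }
  }

  interface Container<T extends ContainerData> {
    T getContainerData();
  }

  // Widened signature: accepts a container of any ContainerData subtype,
  // not just Container<KeyValueContainerData>.
  static long idOf(Container<? extends ContainerData> container) {
    return container.getContainerData().getContainerID();
  }

  public static void main(String[] args) {
    Container<KeyValueContainerData> kv = KeyValueContainerData::new;
    System.out.println(idOf(kv));
  }
}
```

The wildcard keeps call sites type-safe while decoupling the helper from the key-value implementation.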



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaThreeImpl.java:
##########
@@ -132,17 +138,61 @@ public void dumpKVContainerData(long containerID, File dumpDir)
   }
 
   public void loadKVContainerData(File dumpDir)
-      throws IOException {
-    getMetadataTable().loadFromFile(
-        getTableDumpFile(getMetadataTable(), dumpDir));
-    getBlockDataTable().loadFromFile(
-        getTableDumpFile(getBlockDataTable(), dumpDir));
-    if (VersionedDatanodeFeatures.isFinalized(HDDSLayoutFeature.HBASE_SUPPORT)) {
-      getLastChunkInfoTable().loadFromFile(
-          getTableDumpFile(getLastChunkInfoTable(), dumpDir));
+      throws IOException, RocksDBException {

Review Comment:
   nit: Do you want to throw RocksDBException here? Why not wrap RocksDBException inside an IOException? For Ozone, a RocksDBException is effectively an I/O error.
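
The wrapping the reviewer suggests could look like the sketch below. This is illustrative only: RocksDBException here is a local stand-in for org.rocksdb.RocksDBException, and the method names mirror the diff but are simplified.

```java
import java.io.IOException;

// Sketch: keep the public signature IOException-only by wrapping the
// RocksDB error, so callers deal with a single checked exception type.
public class WrapSketch {
  // Local stand-in for org.rocksdb.RocksDBException.
  static class RocksDBException extends Exception {
    RocksDBException(String msg) { super(msg); }
  }

  // Simulates the internal SST read that may fail at the RocksDB layer.
  static void loadFromSst(boolean fail) throws RocksDBException {
    if (fail) {
      throw new RocksDBException("sst read failed");
    }
  }

  static void loadKVContainerData(boolean fail) throws IOException {
    try {
      loadFromSst(fail);
    } catch (RocksDBException e) {
      // Preserve the original error as the cause for debugging.
      throw new IOException("Failed to load container data from dump files", e);
    }
  }

  public static void main(String[] args) {
    try {
      loadKVContainerData(true);
    } catch (IOException e) {
      System.out.println(e.getCause().getMessage());
    }
  }
}
```

Wrapping keeps the cause chain intact, so the RocksDB-level detail is still available in stack traces.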



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaThreeImpl.java:
##########
@@ -132,17 +138,61 @@ public void dumpKVContainerData(long containerID, File dumpDir)
   }
 
   public void loadKVContainerData(File dumpDir)
-      throws IOException {
-    getMetadataTable().loadFromFile(
-        getTableDumpFile(getMetadataTable(), dumpDir));
-    getBlockDataTable().loadFromFile(
-        getTableDumpFile(getBlockDataTable(), dumpDir));
-    if (VersionedDatanodeFeatures.isFinalized(HDDSLayoutFeature.HBASE_SUPPORT)) {
-      getLastChunkInfoTable().loadFromFile(
-          getTableDumpFile(getLastChunkInfoTable(), dumpDir));
+      throws IOException, RocksDBException {
+
+    try (BatchOperation batch = getBatchHandler().initBatchOperation()) {
+      processTable(batch, getTableDumpFile(getMetadataTable(), dumpDir),
+          getDbDef().getMetadataColumnFamily().getKeyCodec(),
+          getDbDef().getMetadataColumnFamily().getValueCodec(),
+          getMetadataTable());
+      processTable(batch, getTableDumpFile(getBlockDataTable(), dumpDir),
+          getDbDef().getBlockDataColumnFamily().getKeyCodec(),
+          getDbDef().getBlockDataColumnFamily().getValueCodec(),
+          getBlockDataTable());
+      if (VersionedDatanodeFeatures.isFinalized(HDDSLayoutFeature.HBASE_SUPPORT)) {
+        processTable(batch, getTableDumpFile(getLastChunkInfoTable(), dumpDir),
+            getDbDef().getLastChunkInfoColumnFamily().getKeyCodec(),
+            getDbDef().getLastChunkInfoColumnFamily().getValueCodec(),
+            getLastChunkInfoTable());
+      }
+      processTable(batch, getTableDumpFile(getDeleteTransactionTable(), dumpDir),
+          ((DatanodeSchemaThreeDBDefinition)getDbDef()).getDeleteTransactionsColumnFamily().getKeyCodec(),
+          ((DatanodeSchemaThreeDBDefinition)getDbDef()).getDeleteTransactionsColumnFamily().getValueCodec(),
+          getDeleteTransactionTable());
+
+      getStore().commitBatchOperation(batch);
+    }
+  }
+
+  private <K, V> void processTable(BatchOperation batch, File tableDumpFile,
+      Codec<K> keyCodec, Codec<V> valueCodec, Table<K, V> table) throws IOException, RocksDBException {
+    if (isFileEmpty(tableDumpFile)) {
+      LOG.debug("SST File {} is empty. Skipping processing.", tableDumpFile.getAbsolutePath());
+      return;
+    }
+
+    try (ManagedOptions managedOptions = new ManagedOptions();
+         ManagedSstFileReader sstFileReader = new ManagedSstFileReader(managedOptions)) {
+      sstFileReader.open(tableDumpFile.getAbsolutePath());
+      try (ManagedReadOptions managedReadOptions = new ManagedReadOptions();
+           ManagedSstFileReaderIterator iterator =
+               ManagedSstFileReaderIterator.managed(sstFileReader.newIterator(managedReadOptions))) {

Review Comment:
   Can you confirm this? How are we generating this SST file?



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaThreeImpl.java:
##########
@@ -132,17 +138,61 @@ public void dumpKVContainerData(long containerID, File dumpDir)
   }
 
   public void loadKVContainerData(File dumpDir)
-      throws IOException {
-    getMetadataTable().loadFromFile(
-        getTableDumpFile(getMetadataTable(), dumpDir));
-    getBlockDataTable().loadFromFile(
-        getTableDumpFile(getBlockDataTable(), dumpDir));
-    if (VersionedDatanodeFeatures.isFinalized(HDDSLayoutFeature.HBASE_SUPPORT)) {
-      getLastChunkInfoTable().loadFromFile(
-          getTableDumpFile(getLastChunkInfoTable(), dumpDir));
+      throws IOException, RocksDBException {
+
+    try (BatchOperation batch = getBatchHandler().initBatchOperation()) {
+      processTable(batch, getTableDumpFile(getMetadataTable(), dumpDir),
+          getDbDef().getMetadataColumnFamily().getKeyCodec(),
+          getDbDef().getMetadataColumnFamily().getValueCodec(),
+          getMetadataTable());
+      processTable(batch, getTableDumpFile(getBlockDataTable(), dumpDir),
+          getDbDef().getBlockDataColumnFamily().getKeyCodec(),
+          getDbDef().getBlockDataColumnFamily().getValueCodec(),
+          getBlockDataTable());
+      if (VersionedDatanodeFeatures.isFinalized(HDDSLayoutFeature.HBASE_SUPPORT)) {
+        processTable(batch, getTableDumpFile(getLastChunkInfoTable(), dumpDir),
+            getDbDef().getLastChunkInfoColumnFamily().getKeyCodec(),
+            getDbDef().getLastChunkInfoColumnFamily().getValueCodec(),
+            getLastChunkInfoTable());
+      }
+      processTable(batch, getTableDumpFile(getDeleteTransactionTable(), dumpDir),
+          ((DatanodeSchemaThreeDBDefinition)getDbDef()).getDeleteTransactionsColumnFamily().getKeyCodec(),
+          ((DatanodeSchemaThreeDBDefinition)getDbDef()).getDeleteTransactionsColumnFamily().getValueCodec(),
+          getDeleteTransactionTable());
+
+      getStore().commitBatchOperation(batch);
+    }
+  }
+
+  private <K, V> void processTable(BatchOperation batch, File tableDumpFile,
+      Codec<K> keyCodec, Codec<V> valueCodec, Table<K, V> table) throws IOException, RocksDBException {
+    if (isFileEmpty(tableDumpFile)) {
+      LOG.debug("SST File {} is empty. Skipping processing.", tableDumpFile.getAbsolutePath());
+      return;
+    }
+
+    try (ManagedOptions managedOptions = new ManagedOptions();
+         ManagedSstFileReader sstFileReader = new ManagedSstFileReader(managedOptions)) {
+      sstFileReader.open(tableDumpFile.getAbsolutePath());
+      try (ManagedReadOptions managedReadOptions = new ManagedReadOptions();
+           ManagedSstFileReaderIterator iterator =
+               ManagedSstFileReaderIterator.managed(sstFileReader.newIterator(managedReadOptions))) {

Review Comment:
   Are we sure there are no tombstones in this SST file?



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java:
##########
@@ -108,6 +121,20 @@ public byte[] unpackContainerData(Container<KeyValueContainerData> container,
     return descriptorFileContent;
   }
 
+  private void persistCustomContainerState(Container<KeyValueContainerData> container, byte[] descriptorContent,
+      ContainerProtos.ContainerDataProto.State state, Path containerMetadataPath) throws IOException {
+    if (descriptorContent == null) {
+      LOG.warn("Skipping persisting of custom state. Container descriptor is null for container {}",
+          container.getContainerData().getContainerID());
+      return;
+    }
+
+    KeyValueContainerData originalContainerData =
+        (KeyValueContainerData) ContainerDataYaml.readContainer(descriptorContent);

Review Comment:
   We needn't make this KeyValueContainerData-specific. Do you think we could move this function into the ContainerPacker interface itself and make it a default method?
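
The default-method pattern being suggested could look roughly like this sketch. The types below are illustrative stubs, not the real Ozone ContainerPacker or ContainerData interfaces, and the helper is reduced to its null-descriptor guard.

```java
// Sketch only: stub types stand in for Ozone's ContainerPacker/ContainerData.
public class DefaultMethodSketch {
  interface ContainerData {
    long getContainerID();
  }

  // The suggestion: hoist the shared helper into the packer interface as a
  // default method, so every packer implementation inherits it and it is no
  // longer tied to KeyValueContainerData.
  interface ContainerPacker<T extends ContainerData> {
    default boolean shouldPersistCustomState(byte[] descriptorContent) {
      // A null descriptor means there is nothing to persist; callers skip.
      return descriptorContent != null;
    }
  }

  static class KeyValuePacker implements ContainerPacker<ContainerData> {
  }

  public static void main(String[] args) {
    KeyValuePacker packer = new KeyValuePacker();
    System.out.println(packer.shouldPersistCustomState(null));
    System.out.println(packer.shouldPersistCustomState(new byte[] {1}));
  }
}
```

A default method keeps one shared implementation while still letting a specific packer override it if its container type needs different handling.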



##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/TestContainerReplication.java:
##########
@@ -282,6 +284,43 @@ private static void deleteContainer(MiniOzoneCluster cluster, DatanodeDetails dn
   }
 
 
+  @Test
+  public void testImportedContainerIsClosed() throws Exception {

Review Comment:
   Can we have a MiniOzoneCluster test case for the container cleanup on DN restart?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

