jojochuang commented on code in PR #9129:
URL: https://github.com/apache/ozone/pull/9129#discussion_r2427411908


##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/SnapshotCache.java:
##########
@@ -151,7 +151,6 @@ public void invalidate(UUID key) {
         LOG.warn("SnapshotId: '{}' does not exist in snapshot cache.", k);
       } else {
         try {
-          v.get().getMetadataManager().getStore().flushDB();

Review Comment:
   I believe we wanted to flush the DB before eviction? This is specifically 
relied on by transferSnapshotData() to ensure the snapshot DB is flushed before transfer.
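
   For illustration, the ordering concern here is the usual buffered-write one: bytes still sitting in a writer's buffer are invisible to anything copying the underlying file. This is a standalone JDK sketch (plain `java.nio`, not the Ozone/RocksDB APIs, so the class and file names are made up) of why the flush has to happen before the transfer:

   ```java
   import java.io.BufferedWriter;
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;

   public class FlushBeforeTransferDemo {
     public static void main(String[] args) throws IOException {
       Path f = Files.createTempFile("snapshot-db", ".tmp");
       BufferedWriter w = Files.newBufferedWriter(f);
       w.write("pending-record");
       // Not flushed yet: the 14 bytes are still in the writer's buffer,
       // so a concurrent copy of the file would miss them.
       System.out.println("before flush: " + Files.size(f)); // 0
       w.flush();
       // Only now are the bytes durable in the file being transferred.
       System.out.println("after flush: " + Files.size(f));  // 14
       w.close();
       Files.delete(f);
     }
   }
   ```

   Presumably `flushDB()` plays the same role for the RocksDB WAL/memtable: persist pending writes into the snapshot DB files before they are archived.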



##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOMDbCheckpointServletInodeBasedXfer.java:
##########
@@ -377,7 +377,6 @@ public void testSnapshotDBConsistency() throws Exception {
     }
     Path snapshotDbDir = Paths.get(newDbDir.toPath().toString(), OM_SNAPSHOT_CHECKPOINT_DIR,
         OM_DB_NAME + "-" + snapshotToModify.getSnapshotId());
-    deleteWalFiles(snapshotDbDir);

Review Comment:
   Curious: why remove the deletion of the WAL files?



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDBCheckpointServletInodeBasedXfer.java:
##########
@@ -243,22 +245,26 @@ public void writeDbDataToStream(HttpServletRequest request, OutputStream destina
       if (shouldContinue) {
         // we finished transferring files from snapshot DB's by now and
         // this is the last step where we transfer the active om.db contents
-        checkpoint = createAndPrepareCheckpoint(tmpdir, true);
-        // unlimited files as we want the Active DB contents to be transferred in a single batch
-        maxTotalSstSize.set(Long.MAX_VALUE);
-        Path checkpointDir = checkpoint.getCheckpointLocation();
-        writeDBToArchive(sstFilesToExclude, checkpointDir,
-            maxTotalSstSize, archiveOutputStream, tmpdir, hardLinkFileMap, false);
-        if (includeSnapshotData) {
-          Path tmpCompactionLogDir = tmpdir.resolve(getCompactionLogDir().getFileName());
-          Path tmpSstBackupDir = tmpdir.resolve(getSstBackupDir().getFileName());
-          writeDBToArchive(sstFilesToExclude, tmpCompactionLogDir, maxTotalSstSize, archiveOutputStream, tmpdir,
-              hardLinkFileMap, getCompactionLogDir(), false);
-          writeDBToArchive(sstFilesToExclude, tmpSstBackupDir, maxTotalSstSize, archiveOutputStream, tmpdir,
-              hardLinkFileMap, getSstBackupDir(), false);
-          // This is done to ensure all data to be copied correctly is flushed in the snapshot DB
-          transferSnapshotData(sstFilesToExclude, tmpdir, snapshotPaths, maxTotalSstSize,
-              archiveOutputStream, hardLinkFileMap);
+        SnapshotCache snapshotCache = om.getOmSnapshotManager().getSnapshotCache();
+        // lock is null when includeSnapshotData=false, no snapshot consistency needed
+        try (UncheckedAutoCloseableSupplier<OMLockDetails> lock = includeSnapshotData ? snapshotCache.lock() : null) {

Review Comment:
   Just to confirm: this is a reentrant lock, correct? Later, inside 
transferSnapshotData(), a read lock on SNAPSHOT_DB_LOCK is acquired while this lock is still held.
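
   For reference, if the underlying lock behaves like a JDK `ReentrantReadWriteLock` (an assumption; the actual Ozone lock implementation may differ), the same thread can re-acquire the read lock it already holds:

   ```java
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   public class ReentrantReadDemo {
     public static void main(String[] args) {
       ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
       rwLock.readLock().lock();  // outer acquisition
       rwLock.readLock().lock();  // nested acquisition by the same thread succeeds
       System.out.println(rwLock.getReadHoldCount()); // 2
       rwLock.readLock().unlock();
       rwLock.readLock().unlock();
       System.out.println(rwLock.getReadHoldCount()); // 0
     }
   }
   ```

   If the lock were not reentrant, the nested acquisition inside transferSnapshotData() would self-deadlock, which is why this is worth confirming.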



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

