smengcl commented on code in PR #4045:
URL: https://github.com/apache/ozone/pull/4045#discussion_r1050312769


##########
hadoop-hdds/rocksdb-checkpoint-differ/src/main/java/org/apache/ozone/rocksdiff/RocksDBCheckpointDiffer.java:
##########
@@ -939,6 +978,114 @@ private void populateCompactionDAG(List<String> inputFiles,
 
   }
 
+  /**
+   * This is the task definition which is run periodically by the service
+   * executor at fixed delay.
+   * It looks for snapshots in compaction DAG which are older than the allowed
+   * time to be in compaction DAG and removes them from the DAG.
+   */
+  public void pruneOlderSnapshotsWithCompactionHistory() {
+    String snapshotDir = null;
+    long currentTimeMillis = System.currentTimeMillis();
+
+    while (!snapshots.isEmpty() &&
+        (currentTimeMillis - snapshots.peek().getLeft())
+            > maxAllowedTimeInDag) {
+      snapshotDir = snapshots.poll().getRight();
+    }
+
+    if (snapshotDir != null) {
+      pruneSnapshotFileNodesFromDag(snapshotDir);
+    }
+  }
+
+  /**
+   * Prunes forward and backward DAGs when oldest snapshot with compaction
+   * history gets deleted.
+   */
+  public void pruneSnapshotFileNodesFromDag(String snapshotDir) {
+    Set<String> snapshotSstFiles = readRocksDBLiveFiles(snapshotDir);

Review Comment:
   @GeorgeJahad Yup, we have a jira for that: HDDS-7601. Though it is not being done yet. The plan is to have a new interface (e.g. `OmDBMetaInfoManager`) that has methods for compaction log operations. It might happen after the merge to master branch.
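For illustration only, the age-based poll loop in `pruneOlderSnapshotsWithCompactionHistory` above can be modeled as a standalone sketch. The `MAX_ALLOWED_TIME_MS` constant and the `(creationTimeMillis, snapshotDir)` queue here are simplified stand-ins for the `maxAllowedTimeInDag` and `snapshots` fields in the patch, and `pruneOlderThan` is a hypothetical helper name:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayDeque;
import java.util.Map.Entry;
import java.util.Queue;

public class SnapshotPruneSketch {
  // Hypothetical stand-in for the patch's maxAllowedTimeInDag field.
  static final long MAX_ALLOWED_TIME_MS = 1000L;

  // Modeled on pruneOlderSnapshotsWithCompactionHistory: drain expired
  // (creationTimeMillis, snapshotDir) pairs from the head of the queue and
  // return only the most recently expired dir, i.e. the single dir the
  // patch would pass to pruneSnapshotFileNodesFromDag.
  static String pruneOlderThan(Queue<Entry<Long, String>> snapshots,
                               long nowMillis) {
    String lastExpiredDir = null;
    while (!snapshots.isEmpty()
        && nowMillis - snapshots.peek().getKey() > MAX_ALLOWED_TIME_MS) {
      lastExpiredDir = snapshots.poll().getValue();
    }
    return lastExpiredDir;
  }

  public static void main(String[] args) {
    Queue<Entry<Long, String>> snapshots = new ArrayDeque<>();
    snapshots.add(new SimpleEntry<>(0L, "snap-0"));
    snapshots.add(new SimpleEntry<>(500L, "snap-1"));
    snapshots.add(new SimpleEntry<>(2000L, "snap-2"));
    // At t=2100ms, snap-0 (age 2100) and snap-1 (age 1600) exceed the
    // 1000ms limit; snap-2 (age 100) does not and stays in the queue.
    System.out.println(pruneOlderThan(snapshots, 2100L)); // prints "snap-1"
  }
}
```

Since entries are appended in creation order, the queue head is always the oldest snapshot, so a single pass from the head is enough; pruning up to the last expired dir implicitly covers everything older than it.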



##########
hadoop-hdds/rocksdb-checkpoint-differ/src/main/java/org/apache/ozone/rocksdiff/RocksDBCheckpointDiffer.java:
##########
@@ -939,6 +978,114 @@ private void populateCompactionDAG(List<String> 
inputFiles,
 
   }
 
+  /**
+   * This is the task definition which is run periodically by the service
+   * executor at fixed delay.
+   * It looks for snapshots in compaction DAG which are older than the allowed
+   * time to be in compaction DAG and removes them from the DAG.
+   */
+  public void pruneOlderSnapshotsWithCompactionHistory() {
+    String snapshotDir = null;
+    long currentTimeMillis = System.currentTimeMillis();
+
+    while (!snapshots.isEmpty() &&
+        (currentTimeMillis - snapshots.peek().getLeft())
+            > maxAllowedTimeInDag) {
+      snapshotDir = snapshots.poll().getRight();
+    }
+
+    if (snapshotDir != null) {
+      pruneSnapshotFileNodesFromDag(snapshotDir);
+    }
+  }
+
+  /**
+   * Prunes forward and backward DAGs when oldest snapshot with compaction
+   * history gets deleted.
+   */
+  public void pruneSnapshotFileNodesFromDag(String snapshotDir) {
+    Set<String> snapshotSstFiles = readRocksDBLiveFiles(snapshotDir);

Review Comment:
   @GeorgeJahad Yup we have a jira for that: HDDS-7601. Though it is not being 
done yet. The plan is to have a new interface (e.g. `OmDBMetaInfoManager`) that 
has methods for compaction log operations. It might happen after the merge to 
master branch.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
