GeorgeJahad commented on code in PR #3786:
URL: https://github.com/apache/ozone/pull/3786#discussion_r992671793
##########
hadoop-hdds/rocksdb-checkpoint-differ/src/main/java/org/apache/ozone/rocksdiff/RocksDBCheckpointDiffer.java:
##########
@@ -247,9 +300,30 @@ public void setRocksDBForCompactionTracking(
public void onCompactionCompleted(
final RocksDB db, final CompactionJobInfo compactionJobInfo) {
synchronized (db) {
+
LOG.warn(compactionJobInfo.compactionReason().toString());
LOG.warn("List of input files:");
+
+ if (compactionJobInfo.inputFiles().size() == 0) {
+ LOG.error("Compaction input files list is empty?");
+ return;
+ }
+
+ final StringBuilder sb = new StringBuilder();
+
+ // kLevelL0FilesNum / kLevelMaxLevelSize. TODO: REMOVE
+ sb.append("#
").append(compactionJobInfo.compactionReason()).append('\n');
+
+ // Trim DB path, only keep the SST file name
+ final int filenameBegin =
+ compactionJobInfo.inputFiles().get(0).lastIndexOf("/");
+
for (String file : compactionJobInfo.inputFiles()) {
+ final String fn = file.substring(filenameBegin + 1);
+ sb.append(fn).append('\t'); // TODO: Trim last delimiter
+
+ // Create hardlink backups for the SST files that are going
Review Comment:
Yeah, you may be right that it isn't necessary to do it in onCompactionBegin()
(that compaction_flush code is quite confusing, so I can't tell). But I do think
it is safer (and more future-proof) without adding significant complexity, so I
am happy to see the links created there.
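
For illustration, a minimal sketch of the alternative being discussed: creating
the hardlink backups as soon as the compaction begins rather than when it
completes. It assumes RocksJava's AbstractEventListener exposes an
onCompactionBegin() callback with the same shape as the onCompactionCompleted()
shown in the diff; the listener class name and the sstBackupDir destination
directory are hypothetical and not part of the PR.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.rocksdb.AbstractEventListener;
import org.rocksdb.CompactionJobInfo;
import org.rocksdb.RocksDB;

/**
 * Sketch only: back up every compaction input SST with a hardlink as soon as
 * the compaction starts, so the file contents survive even if RocksDB deletes
 * the originals once the job finishes.
 */
public class CompactionBeginBackupListener extends AbstractEventListener {

  // Hypothetical destination directory for the SST hardlink backups.
  private final Path sstBackupDir;

  public CompactionBeginBackupListener(Path sstBackupDir) {
    this.sstBackupDir = sstBackupDir;
  }

  @Override
  public void onCompactionBegin(final RocksDB db,
      final CompactionJobInfo compactionJobInfo) {
    // Mirrors the synchronization used in onCompactionCompleted in the diff.
    synchronized (db) {
      for (String file : compactionJobInfo.inputFiles()) {
        final Path source = Paths.get(file);
        final Path link = sstBackupDir.resolve(source.getFileName());
        try {
          // Hardlinks are cheap and keep the data blocks reachable even
          // after RocksDB unlinks the original SST file.
          if (!Files.exists(link)) {
            Files.createLink(link, source);
          }
        } catch (IOException e) {
          // A real implementation would log and decide whether to ignore or
          // abort; the sketch just surfaces the failure.
          throw new UncheckedIOException(e);
        }
      }
    }
  }
}
```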