pvargacl commented on a change in pull request #1716:
URL: https://github.com/apache/hive/pull/1716#discussion_r534827989



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -316,6 +314,30 @@ private boolean removeFiles(String location, ValidWriteIdList writeIdList, Compa
       }
       fs.delete(dead, true);
     }
-    return true;
+    // Check if there will be more obsolete directories to clean when possible. We will only mark cleaned when this
+    // number reaches 0.
+    return getNumEventuallyObsoleteDirs(location, dirSnapshots) == 0;
+  }
+
+  /**
+   * Get the number of base/delta directories the Cleaner should remove eventually. If we check this after cleaning
+   * we can see if the Cleaner has further work to do in this table/partition directory that it hasn't been able to
+   * finish, e.g. because of an open transaction at the time of compaction.
+   * We do this by assuming that there are no open transactions anywhere and then calling getAcidState. If there are
+   * obsolete directories, then the Cleaner has more work to do.
+   * @param location location of table
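+   * @param dirSnapshots snapshots of the directory contents under the given location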
+   * @return number of dirs left for the cleaner to clean – eventually
+   * @throws IOException
+   */
+  private int getNumEventuallyObsoleteDirs(String location, Map<Path, AcidUtils.HdfsDirSnapshot> dirSnapshots)
+      throws IOException {
+    ValidTxnList validTxnList = new ValidReadTxnList();
+    //save it so that getAcidState() sees it
+    conf.set(ValidTxnList.VALID_TXNS_KEY, validTxnList.writeToString());
+    ValidReaderWriteIdList validWriteIdList = new ValidReaderWriteIdList();
+    Path locPath = new Path(location);
+    AcidUtils.Directory dir = AcidUtils.getAcidState(locPath.getFileSystem(conf), locPath, conf, validWriteIdList,
+        Ref.from(false), false, dirSnapshots);
+    return dir.getObsolete().size();

Review comment:
       New deltas by themselves don't stop marking the compaction cleaned (they are not in the obsolete list),
but overlapping compactions do. I think a problematic scenario could be if there are some long running ETL jobs
next to your streaming writes. That would mean that the cleaning jobs would pile up: they would run continuously,
but the low values in min_history_level would prevent them from cleaning up every obsolete directory and from
ever being marked as cleaned :(
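
       To make the pile-up scenario above concrete, below is a rough sketch of how the "removable now" view could
be compared with the "removable eventually" view. It is not part of the patch: the helper name
hasBlockedObsoleteDirs and the idea of diffing the two obsolete counts are assumptions for illustration. It only
reuses the getAcidState call and write id list classes that already appear in this diff, and it is assumed to sit
next to getNumEventuallyObsoleteDirs in Cleaner.java, sharing its conf field and imports.

    // Hypothetical helper, sketched for illustration only; not in the PR.
    private boolean hasBlockedObsoleteDirs(String location, ValidWriteIdList writeIdList,
        Map<Path, AcidUtils.HdfsDirSnapshot> dirSnapshots) throws IOException {
      Path locPath = new Path(location);
      FileSystem fs = locPath.getFileSystem(conf);

      // Directories obsolete under the real write id list, i.e. what the Cleaner is
      // actually allowed to remove right now (limited by open txns / min_history_level).
      AcidUtils.Directory removableNow = AcidUtils.getAcidState(fs, locPath, conf,
          writeIdList, Ref.from(false), false, dirSnapshots);

      // Directories that would be obsolete if every transaction were already committed
      // (the same assumption getNumEventuallyObsoleteDirs makes). A real implementation
      // would save and restore the previous VALID_TXNS_KEY value around this.
      conf.set(ValidTxnList.VALID_TXNS_KEY, new ValidReadTxnList().writeToString());
      AcidUtils.Directory removableEventually = AcidUtils.getAcidState(fs, locPath, conf,
          new ValidReaderWriteIdList(), Ref.from(false), false, dirSnapshots);

      // Anything obsolete "eventually" but not "now" is being held back by an open
      // transaction, e.g. a long running ETL job next to streaming writes. While this
      // stays true the cleaning request keeps being re-queued instead of marked cleaned.
      return removableEventually.getObsolete().size() > removableNow.getObsolete().size();
    }

       With streaming writes plus long running ETL transactions, the "eventually" count can stay ahead of the
"now" count indefinitely, which is exactly the pile-up described above.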



