pvargacl commented on a change in pull request #1716:
URL: https://github.com/apache/hive/pull/1716#discussion_r534995678
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -316,6 +314,30 @@ private boolean removeFiles(String location, ValidWriteIdList writeIdList, Compa
       }
       fs.delete(dead, true);
     }
-    return true;
+    // Check if there will be more obsolete directories to clean when possible. We will only mark cleaned when this
+    // number reaches 0.
+    return getNumEventuallyObsoleteDirs(location, dirSnapshots) == 0;
+  }
+
+  /**
+   * Get the number of base/delta directories the Cleaner should remove eventually. If we check this after cleaning
+   * we can see if the Cleaner has further work to do in this table/partition directory that it hasn't been able to
+   * finish, e.g. because of an open transaction at the time of compaction.
+   * We do this by assuming that there are no open transactions anywhere and then calling getAcidState. If there are
+   * obsolete directories, then the Cleaner has more work to do.
+   * @param location location of table
+   * @return number of dirs left for the cleaner to clean – eventually
+   * @throws IOException
+   */
+  private int getNumEventuallyObsoleteDirs(String location, Map<Path, AcidUtils.HdfsDirSnapshot> dirSnapshots)
+      throws IOException {
+    ValidTxnList validTxnList = new ValidReadTxnList();
+    // save it so that getAcidState() sees it
+    conf.set(ValidTxnList.VALID_TXNS_KEY, validTxnList.writeToString());
+    ValidReaderWriteIdList validWriteIdList = new ValidReaderWriteIdList();
+    Path locPath = new Path(location);
+    AcidUtils.Directory dir = AcidUtils.getAcidState(locPath.getFileSystem(conf), locPath, conf, validWriteIdList,
+        Ref.from(false), false, dirSnapshots);
+    return dir.getObsolete().size();
Review comment:
In upstream it would cause problems: for the delayed cleaner it is necessary that every compaction entry cleans only its own obsoletes. By default every compaction is delayed by 15 minutes, so it is much more likely that by the time a cleaning job finally runs, there are already other jobs waiting in the queue. All of them have their highestWriteId saved, and each will clean only up to that. You should not merge those.
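
To make the bound concrete, below is a minimal sketch (not the actual Hive code) of what "cleans only its own obsoletes" means. QueueEntry is a hypothetical stand-in for a compaction queue entry; in Hive the recorded id lives in CompactionInfo.highestWriteId, and the write-id list is the ValidReaderWriteIdList(tableName, exceptions, abortedBits, highWatermark) constructor from org.apache.hadoop.hive.common.

import java.util.BitSet;

import org.apache.hadoop.hive.common.ValidReaderWriteIdList;

class DelayedCleanerSketch {

  /** Hypothetical queue entry: one pending cleaning job. */
  static class QueueEntry {
    final String fullTableName;
    final long highestWriteId; // recorded when the compaction committed

    QueueEntry(String fullTableName, long highestWriteId) {
      this.fullTableName = fullTableName;
      this.highestWriteId = highestWriteId;
    }
  }

  /**
   * Builds the write-id list a single queued entry is allowed to clean with.
   * The high-water mark is the entry's own highestWriteId, so directories
   * written by later transactions are not reported as obsolete yet; the next
   * entry in the queue (with a higher id) picks them up. Merging entries
   * would erase these per-entry bounds.
   */
  static ValidReaderWriteIdList cleaningBound(QueueEntry entry) {
    // No open/aborted write ids in this simplified sketch.
    return new ValidReaderWriteIdList(
        entry.fullTableName, new long[0], new BitSet(), entry.highestWriteId);
  }
}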