pvargacl commented on a change in pull request #1716:
URL: https://github.com/apache/hive/pull/1716#discussion_r534342957
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -316,6 +314,30 @@ private boolean removeFiles(String location, ValidWriteIdList writeIdList, Compa
       }
       fs.delete(dead, true);
     }
-    return true;
+    // Check if there will be more obsolete directories to clean when possible. We will only mark cleaned when this
+    // number reaches 0.
+    return getNumEventuallyObsoleteDirs(location, dirSnapshots) == 0;
+  }
+
+  /**
+   * Get the number of base/delta directories the Cleaner should remove eventually. If we check this after cleaning,
+   * we can see if the Cleaner has further work to do in this table/partition directory that it hasn't been able to
+   * finish, e.g. because of an open transaction at the time of compaction.
+   * We do this by assuming that there are no open transactions anywhere and then calling getAcidState. If there are
+   * obsolete directories, then the Cleaner has more work to do.
+   * @param location location of the table/partition directory
+   * @return number of directories left for the Cleaner to clean – eventually
+   * @throws IOException
+   */
+  private int getNumEventuallyObsoleteDirs(String location, Map<Path, AcidUtils.HdfsDirSnapshot> dirSnapshots)
+      throws IOException {
+    ValidTxnList validTxnList = new ValidReadTxnList();
+    // save it so that getAcidState() sees it
+    conf.set(ValidTxnList.VALID_TXNS_KEY, validTxnList.writeToString());
+    ValidReaderWriteIdList validWriteIdList = new ValidReaderWriteIdList();
+    Path locPath = new Path(location);
+    AcidUtils.Directory dir = AcidUtils.getAcidState(locPath.getFileSystem(conf), locPath, conf, validWriteIdList,
+        Ref.from(false), false, dirSnapshots);
+    return dir.getObsolete().size();
Review comment:
Case 1: If HIVE-23107 and the follow-up fixes are in place, I think none of these
checks are necessary, because we can be sure the Cleaner only ran when it
could delete everything it was supposed to. Also, if delayed cleaning is enabled,
it is guaranteed that the Cleaner will never delete any more obsolete directories
no matter how many times it runs (see:
validWriteIdList.updateHighWatermark(ci.highestWriteId); a toy sketch of this
capping follows below). If we must choose, checking whether anything was removed
does less damage.
Case 2: If those fixes are not in place, I think checking for obsolete files is
better than checking whether anything was removed.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]