[
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=625506&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-625506
]
ASF GitHub Bot logged work on HIVE-25115:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 20/Jul/21 12:19
Start Date: 20/Jul/21 12:19
Worklog Time Spent: 10m
Work Description: deniskuzZ commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672900959
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
Review comment:
Not sure I got the question, but highestWriteId is recorded at the time
the compaction txn starts, so it records all open txns that have to be
ignored.
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
Review comment:
Not sure I got the question, but highestWriteId is recorded at the time
the compaction txn starts, so it records the write id HWM and all open
txns below it that have to be ignored.
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
Review comment:
Besides the updated HWM, validWriteIdList also has an exceptions list,
which would show whether there are any open txns in that range. What we
are doing here is just lowering the HWM so that the cleaner won't remove
more than this compaction was responsible for.
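The mechanism described in this comment can be sketched with a minimal standalone class. All names below are hypothetical stand-ins for illustration only; the real logic lives in Hive's ValidReaderWriteIdList, whose implementation differs.

```java
import java.util.Arrays;

// Minimal sketch of a valid-write-id list: a high watermark plus an
// exceptions list of open txn write ids below it. Hypothetical stand-in
// for Hive's ValidReaderWriteIdList; not the real API.
class WriteIdList {
    final long highWatermark;  // highest write id considered
    final long[] exceptions;   // open txns below the HWM, still invalid (sorted)

    WriteIdList(long hwm, long[] exceptions) {
        this.highWatermark = hwm;
        this.exceptions = exceptions;
    }

    // Lowering the HWM to this compaction's highestWriteId means the
    // cleaner only treats directories at or below that id as obsolete;
    // newer obsolete directories (from a later compaction) are left alone.
    WriteIdList updateHighWatermark(long newHwm) {
        long[] kept = Arrays.stream(exceptions)
                            .filter(id -> id <= newHwm)
                            .toArray();
        return new WriteIdList(newHwm, kept);
    }

    boolean isWriteIdValid(long writeId) {
        if (writeId > highWatermark) return false;           // above HWM: not this cleaner's job
        return Arrays.binarySearch(exceptions, writeId) < 0; // open txn: must be ignored
    }
}

public class CleanerHwmDemo {
    public static void main(String[] args) {
        // Table-level HWM is 20, with write id 7 still open.
        WriteIdList list = new WriteIdList(20, new long[]{7});
        // This compaction only covered write ids up to 12.
        WriteIdList forCleaner = list.updateHighWatermark(12);
        System.out.println(forCleaner.isWriteIdValid(10)); // true: obsoleted by this compaction
        System.out.println(forCleaner.isWriteIdValid(7));  // false: open txn in the exceptions list
        System.out.println(forCleaner.isWriteIdValid(15)); // false: above the lowered HWM
    }
}
```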
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 625506)
Time Spent: 2h 20m (was: 2h 10m)
> Compaction queue entries may accumulate in "ready for cleaning" state
> ---------------------------------------------------------------------
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
> Issue Type: Improvement
> Reporter: Karen Coppage
> Assignee: Denys Kuzmenko
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 20m
> Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if 2 compactions run on the same table and enter "ready for
> cleaning" state at the same time, only one cleaning will remove obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.
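The scenario in the description can be sketched as follows. This is a hypothetical illustration, assuming obsolete directories are keyed by the write id that obsoleted them; none of these names come from Hive's codebase.

```java
import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

public class ReadyForCleaningDemo {
    // Directories one cleaner run may remove: only those obsoleted at or
    // below that compaction's own high watermark. If the first run used
    // the table-level HWM instead, it would delete everything and the
    // second queue entry would find nothing to clean.
    static NavigableSet<Long> removableDirs(NavigableSet<Long> obsoleteDirs, long compactionHwm) {
        return new TreeSet<>(obsoleteDirs.headSet(compactionHwm, true));
    }

    public static void main(String[] args) {
        // Obsolete directories keyed by the write id that obsoleted them.
        NavigableSet<Long> obsoleteDirs = new TreeSet<>(List.of(5L, 9L, 14L, 18L));

        // First compaction covered write ids <= 10; second covered <= 20.
        NavigableSet<Long> removedByFirst = removableDirs(obsoleteDirs, 10);
        obsoleteDirs.removeAll(removedByFirst);

        // The second "ready for cleaning" entry still finds files to
        // delete, so it does not linger in the queue.
        NavigableSet<Long> removedBySecond = removableDirs(obsoleteDirs, 20);
        System.out.println(removedByFirst);  // [5, 9]
        System.out.println(removedBySecond); // [14, 18]
    }
}
```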
--
This message was sent by Atlassian Jira
(v8.3.4#803005)