[
https://issues.apache.org/jira/browse/HIVE-26704?focusedWorklogId=848396&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-848396
]
ASF GitHub Bot logged work on HIVE-26704:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 01/Mar/23 16:25
Start Date: 01/Mar/23 16:25
Worklog Time Spent: 10m
Work Description: deniskuzZ commented on code in PR #3576:
URL: https://github.com/apache/hive/pull/3576#discussion_r1121822320
##########
ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java:
##########
@@ -140,41 +140,36 @@ public void run() {
            HiveConf.ConfVars.HIVE_COMPACTOR_CLEANER_DURATION_UPDATE_INTERVAL, TimeUnit.MILLISECONDS),
            new CleanerCycleUpdater(MetricsConstants.COMPACTION_CLEANER_CYCLE_DURATION, startedAt));
      }
-
      long minOpenTxnId = txnHandler.findMinOpenTxnIdForCleaner();
-
      checkInterrupt();
      List<CompactionInfo> readyToClean = txnHandler.findReadyToClean(minOpenTxnId, retentionTime);
-
      checkInterrupt();
      if (!readyToClean.isEmpty()) {
-        long minTxnIdSeenOpen = txnHandler.findMinTxnIdSeenOpen();
-        final long cleanerWaterMark =
-            minTxnIdSeenOpen < 0 ? minOpenTxnId : Math.min(minOpenTxnId, minTxnIdSeenOpen);
-
-        LOG.info("Cleaning based on min open txn id: " + cleanerWaterMark);
        List<CompletableFuture<Void>> cleanerList = new ArrayList<>();
        // For checking which compaction can be cleaned we can use the minOpenTxnId.
        // However, findReadyToClean will return all records that were compacted with an old version of HMS,
        // where CQ_NEXT_TXN_ID is not set. For these compactions we need to provide minTxnIdSeenOpen
        // to the clean method, to avoid cleaning up deltas needed for running queries.
        // When min_history_level is finally dropped, then every HMS will commit compactions the new way,
        // and minTxnIdSeenOpen can be removed and minOpenTxnId can be used instead.
-        for (CompactionInfo compactionInfo : readyToClean) {
-
+        for (CompactionInfo ci : readyToClean) {
          // Check for interruption before scheduling each compactionInfo and return if necessary
          checkInterrupt();
-
+
          CompletableFuture<Void> asyncJob =
              CompletableFuture.runAsync(
-                  ThrowingRunnable.unchecked(() -> clean(compactionInfo, cleanerWaterMark, metricsEnabled)),
-                  cleanerExecutor)
-              .exceptionally(t -> {
-                LOG.error("Error clearing {}", compactionInfo.getFullPartitionName(), t);
-                return null;
-              });
+                  ThrowingRunnable.unchecked(() -> {
+                    long minOpenTxn = (ci.minOpenWriteId > 0) ?
+                        ci.nextTxnId + 1 : Math.min(minOpenTxnId, txnHandler.findMinTxnIdSeenOpen());
Review Comment:
Yea, missed that. If we have minOpenWriteId, we shouldn't even call
findMinTxnIdSeenOpen.
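To make the logic under review concrete, here is a standalone sketch of the per-compaction watermark choice. Field and method names mirror the diff (minOpenWriteId, nextTxnId, findMinTxnIdSeenOpen), but this is an illustrative sketch, not the actual Hive Cleaner class:

```java
import java.util.function.LongSupplier;

// Sketch of the per-compaction cleaner watermark selection discussed above.
// Names follow the PR diff; this is not the real org.apache.hadoop.hive code.
public class WatermarkSketch {

  /** Minimal stand-in for the CompactionInfo fields used in the diff. */
  static class CompactionInfo {
    long minOpenWriteId; // > 0 only when HMS recorded per-table open write IDs
    long nextTxnId;      // CQ_NEXT_TXN_ID, set by newer HMS versions at commit
  }

  /**
   * Pick the watermark for one compaction:
   * - new-style records (minOpenWriteId > 0) can be cleaned up to nextTxnId + 1,
   *   so a single global long-running txn no longer blocks them;
   * - old-style records (CQ_NEXT_TXN_ID unset) fall back to the global minimum,
   *   and only they pay for the findMinTxnIdSeenOpen lookup, matching the
   *   reviewer's point that the call should be skipped otherwise.
   */
  static long cleanerWatermark(CompactionInfo ci, long minOpenTxnId,
                               LongSupplier findMinTxnIdSeenOpen) {
    if (ci.minOpenWriteId > 0) {
      return ci.nextTxnId + 1;                  // per-table path, no global lookup
    }
    long seenOpen = findMinTxnIdSeenOpen.getAsLong();
    return seenOpen < 0 ? minOpenTxnId : Math.min(minOpenTxnId, seenOpen);
  }

  public static void main(String[] args) {
    CompactionInfo newStyle = new CompactionInfo();
    newStyle.minOpenWriteId = 7;
    newStyle.nextTxnId = 100;
    // New-style record: the supplier must not be invoked at all.
    System.out.println(cleanerWatermark(newStyle, 50, () -> {
      throw new AssertionError("findMinTxnIdSeenOpen should not be called");
    })); // 101

    CompactionInfo oldStyle = new CompactionInfo();
    System.out.println(cleanerWatermark(oldStyle, 50, () -> 42L)); // 42
    System.out.println(cleanerWatermark(oldStyle, 50, () -> -1L)); // 50
  }
}
```

The throwing supplier in the first call demonstrates the behavior the reviewer asked for: on the per-table path the global lookup is never executed.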
Issue Time Tracking
-------------------
Worklog Id: (was: 848396)
Time Spent: 6h (was: 5h 50m)
> Cleaner shouldn't be blocked by global min open txnId
> -----------------------------------------------------
>
> Key: HIVE-26704
> URL: https://issues.apache.org/jira/browse/HIVE-26704
> Project: Hive
> Issue Type: Task
> Reporter: Denys Kuzmenko
> Assignee: Denys Kuzmenko
> Priority: Major
> Labels: pull-request-available
> Time Spent: 6h
> Remaining Estimate: 0h
>
> *Single transaction blocks cluster-wide Cleaner operations*
> Currently, a single long-running transaction can prevent the Cleaner from
> cleaning up any tables. This causes file buildup in tables, which can
> incur performance penalties when listing the directories (note that
> compaction itself is not blocked by this, so unnecessary data is not read,
> but the leftover files remain and still carry a listing cost).
> We could reduce the set of files protected by an open transaction if we had
> query-to-table correlation data stored in the backend DB, but that change
> would require revisiting how this detail is currently recorded.
> The naive and somewhat backward-compatible approach is to capture the
> minOpenWriteIds per table. It involves no mutation (as in, the HMS DB does
> not need to wait for another user's operation to record it).
> This does add data writes to the HMS backend DB, but these are blind
> inserts that can be group-committed across many users.
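The per-table minOpenWriteId idea can be sketched in memory as follows. This is purely illustrative: the class and method names are hypothetical, and in the actual proposal this state would live in the HMS backend DB as blind inserts rather than in a map:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListSet;

// Hypothetical sketch of per-table minimum-open-write-ID tracking, the
// "naive" approach described above. Not Hive's actual schema or API.
public class MinOpenWriteIdTracker {

  // table name -> write IDs of currently open writers on that table
  private final ConcurrentHashMap<String, ConcurrentSkipListSet<Long>> open =
      new ConcurrentHashMap<>();

  /** Record a newly opened write (a blind, group-committable insert in HMS terms). */
  public void openWrite(String table, long writeId) {
    open.computeIfAbsent(table, t -> new ConcurrentSkipListSet<>()).add(writeId);
  }

  /** Forget a write once its transaction commits or aborts. */
  public void closeWrite(String table, long writeId) {
    ConcurrentSkipListSet<Long> ids = open.get(table);
    if (ids != null) {
      ids.remove(writeId);
    }
  }

  /**
   * Per-table watermark for the Cleaner: the smallest still-open write ID,
   * or defaultIfNone when no writer is open on this table. A long-running
   * transaction on another table does not affect this table's result.
   */
  public long minOpenWriteId(String table, long defaultIfNone) {
    ConcurrentSkipListSet<Long> ids = open.get(table);
    return (ids == null || ids.isEmpty()) ? defaultIfNone : ids.first();
  }
}
```

The key property is locality: the watermark is scoped to one table, so the cluster-wide Cleaner is no longer blocked by the global minimum open transaction ID.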
--
This message was sent by Atlassian Jira
(v8.20.10#820010)