[ https://issues.apache.org/jira/browse/HIVE-26704?focusedWorklogId=847832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-847832 ]
ASF GitHub Bot logged work on HIVE-26704:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 27/Feb/23 14:01
Start Date: 27/Feb/23 14:01
Worklog Time Spent: 10m
Work Description: deniskuzZ commented on code in PR #3576:
URL: https://github.com/apache/hive/pull/3576#discussion_r1118777607
##########
ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java:
##########
@@ -140,41 +140,36 @@ public void run() {
                 HiveConf.ConfVars.HIVE_COMPACTOR_CLEANER_DURATION_UPDATE_INTERVAL, TimeUnit.MILLISECONDS),
                 new CleanerCycleUpdater(MetricsConstants.COMPACTION_CLEANER_CYCLE_DURATION, startedAt));
       }
-
       long minOpenTxnId = txnHandler.findMinOpenTxnIdForCleaner();
-
       checkInterrupt();
       List<CompactionInfo> readyToClean = txnHandler.findReadyToClean(minOpenTxnId, retentionTime);
-
       checkInterrupt();
       if (!readyToClean.isEmpty()) {
-        long minTxnIdSeenOpen = txnHandler.findMinTxnIdSeenOpen();
-        final long cleanerWaterMark =
-            minTxnIdSeenOpen < 0 ? minOpenTxnId : Math.min(minOpenTxnId, minTxnIdSeenOpen);
-
-        LOG.info("Cleaning based on min open txn id: " + cleanerWaterMark);
         List<CompletableFuture<Void>> cleanerList = new ArrayList<>();
         // For checking which compaction can be cleaned we can use the minOpenTxnId
         // However findReadyToClean will return all records that were compacted with old version of HMS
         // where the CQ_NEXT_TXN_ID is not set. For these compactions we need to provide minTxnIdSeenOpen
         // to the clean method, to avoid cleaning up deltas needed for running queries
         // when min_history_level is finally dropped, than every HMS will commit compaction the new way
         // and minTxnIdSeenOpen can be removed and minOpenTxnId can be used instead.
-        for (CompactionInfo compactionInfo : readyToClean) {
-
+        for (CompactionInfo ci : readyToClean) {
           //Check for interruption before scheduling each compactionInfo and return if necessary
           checkInterrupt();
-
+
           CompletableFuture<Void> asyncJob =
               CompletableFuture.runAsync(
-                      ThrowingRunnable.unchecked(() -> clean(compactionInfo, cleanerWaterMark, metricsEnabled)),
-                      cleanerExecutor)
-                  .exceptionally(t -> {
-                    LOG.error("Error clearing {}", compactionInfo.getFullPartitionName(), t);
-                    return null;
-                  });
+                      ThrowingRunnable.unchecked(() -> {
+                        long minOpenTxnGLB = (ci.minOpenWriteId > 0) ?
Review Comment:
fixed
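
The hunk above is cut off right after "long minOpenTxnGLB = (ci.minOpenWriteId > 0) ?", so the exact expression used in the PR is not visible here. Below is a rough, non-authoritative sketch of the rule the in-code comment describes (use the per-table minOpenWriteId carried by new-style compaction records, and fall back to the legacy global watermark only for records written by an old HMS). The class and field names are simplified stand-ins, not the real Hive types.

// Simplified, self-contained illustration; CompactionRecord is a stand-in for the
// fields of CompactionInfo that matter here, not the actual Hive class.
public class CleanerWatermarkSketch {

  static class CompactionRecord {
    final String fullPartitionName;
    final long minOpenWriteId;   // > 0 only when the compaction was committed by a new HMS

    CompactionRecord(String fullPartitionName, long minOpenWriteId) {
      this.fullPartitionName = fullPartitionName;
      this.minOpenWriteId = minOpenWriteId;
    }
  }

  // Pick the value the cleaner compares obsolete deltas against for one compaction record.
  static long watermarkFor(CompactionRecord ci, long minOpenTxnId, long minTxnIdSeenOpen) {
    if (ci.minOpenWriteId > 0) {
      // New-style record: a table-level watermark, so an open transaction on an
      // unrelated table no longer blocks cleaning here.
      return ci.minOpenWriteId;
    }
    // Old-style record (no CQ_NEXT_TXN_ID): same fallback as the removed cleanerWaterMark.
    return minTxnIdSeenOpen < 0 ? minOpenTxnId : Math.min(minOpenTxnId, minTxnIdSeenOpen);
  }

  public static void main(String[] args) {
    CompactionRecord newStyle = new CompactionRecord("db.tbl/p=1", 42L);
    CompactionRecord oldStyle = new CompactionRecord("db.tbl/p=2", -1L);
    System.out.println(watermarkFor(newStyle, 100L, 90L));  // 42: per-table write id wins
    System.out.println(watermarkFor(oldStyle, 100L, 90L));  // 90: legacy global fallback
  }
}

As the comment block in the diff says, once min_history_level is finally dropped every HMS commits compactions the new way and only the first branch remains.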
##########
ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java:
##########
@@ -395,6 +399,32 @@ private boolean allowOperationInATransaction(QueryPlan queryPlan) {
     return false;
   }
+  @Override
+  public void addWriteIdsToMinHistory(QueryPlan plan, ValidTxnWriteIdList txnWriteIds) {
+    if (plan.getInputs().isEmpty()) {
+      return;
+    }
+    Map<String, Long> writeIds = plan.getInputs().stream()
+        .filter(input -> !input.isDummy() && AcidUtils.isTransactionalTable(input.getTable()))
+        .map(input -> input.getTable().getFullyQualifiedName())
+        .collect(Collectors.toSet()).stream()
Review Comment:
fixed
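
The snippet ends mid-pipeline at ".collect(Collectors.toSet()).stream()", so the rest of the method is not shown here. Below is a self-contained sketch of the dedupe-then-map shape the visible lines suggest; the table names and the lookup map are invented for illustration, standing in for whatever the real method derives from the QueryPlan and ValidTxnWriteIdList.

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustration only: collect each transactional input table once, then pair it with
// the write id to be recorded for it.
public class MinHistoryWriteIdsSketch {

  static Map<String, Long> collectWriteIds(List<String> inputTableNames,
                                           Map<String, Long> minOpenWriteIdByTable) {
    return inputTableNames.stream()
        // toSet() mirrors the diff: a table read several times should produce one entry
        .collect(Collectors.toSet()).stream()
        .collect(Collectors.toMap(
            Function.identity(),
            // placeholder default; how missing values are handled is not visible in the hunk
            table -> minOpenWriteIdByTable.getOrDefault(table, Long.MAX_VALUE)));
  }

  public static void main(String[] args) {
    List<String> inputs = List.of("db.orders", "db.orders", "db.customers");
    Map<String, Long> minOpenWriteIds = Map.of("db.orders", 17L, "db.customers", 5L);
    // Each table appears exactly once, e.g. {db.customers=5, db.orders=17}
    System.out.println(collectWriteIds(inputs, minOpenWriteIds));
  }
}

A plain .distinct() on the name stream would deduplicate without materializing the intermediate set; whether that is what the "fixed" reply refers to cannot be seen from this snippet.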
Issue Time Tracking
-------------------
Worklog Id: (was: 847832)
Time Spent: 4h 40m (was: 4.5h)
> Cleaner shouldn't be blocked by global min open txnId
> -----------------------------------------------------
>
> Key: HIVE-26704
> URL: https://issues.apache.org/jira/browse/HIVE-26704
> Project: Hive
> Issue Type: Task
> Reporter: Denys Kuzmenko
> Assignee: Denys Kuzmenko
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h 40m
> Remaining Estimate: 0h
>
> *Single transaction blocks cluster-wide Cleaner operations*
> Currently, a single long-running transaction can prevent the Cleaner from
> cleaning up any table. This causes files to accumulate, which hurts
> performance when listing directories (note that compaction itself is not
> blocked, so unnecessary data is not read, but the obsolete files stay behind
> and still carry a performance penalty).
> We could shrink the set of files protected by an open transaction if
> query-table correlation data were stored in the backend DB, but that requires
> revisiting how this detail is currently recorded.
> The naive and somewhat backward-compatible approach is to capture the
> minOpenWriteIds per table. This is a non-mutating operation (there is no need
> for the HMS DB to wait on another user's operation in order to record it). It
> does add extra writes to the HMS backend DB, but each one is a blind insert
> that can be group-committed across many users.
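
To make the "blind insert" point above concrete: the write neither reads existing rows nor waits on another user's locks, so rows from many callers can be batched and group-committed. A hypothetical JDBC sketch follows; the table and column names are placeholders rather than the schema this patch actually adds.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

// Hypothetical illustration of a blind, batched insert of per-table min open write ids.
public class MinHistoryInsertSketch {

  static void insertMinHistory(Connection dbConn, long txnId, Map<String, Long> writeIds)
      throws SQLException {
    // Placeholder table/column names, used only to show the shape of the statement.
    String sql = "INSERT INTO MIN_HISTORY_WRITE_ID (MH_TXNID, MH_TABLE, MH_WRITEID) VALUES (?, ?, ?)";
    try (PreparedStatement ps = dbConn.prepareStatement(sql)) {
      for (Map.Entry<String, Long> e : writeIds.entrySet()) {
        ps.setLong(1, txnId);
        ps.setString(2, e.getKey());
        ps.setLong(3, e.getValue());
        ps.addBatch();        // one row per table read by the query
      }
      ps.executeBatch();      // single round trip; no SELECT and no row locks taken first
    }
  }
}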
--
This message was sent by Atlassian Jira
(v8.20.10#820010)