[
https://issues.apache.org/jira/browse/HIVE-27020?focusedWorklogId=857248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-857248
]
ASF GitHub Bot logged work on HIVE-27020:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 16/Apr/23 18:12
Start Date: 16/Apr/23 18:12
Worklog Time Spent: 10m
Work Description: SourabhBadhya commented on code in PR #4091:
URL: https://github.com/apache/hive/pull/4091#discussion_r1167991250
##########
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactorWithAbortCleanupUsingCompactionCycle.java:
##########
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.txn.compactor;
+
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.ql.txn.compactor.TestCompactor;
+import org.junit.Before;
+
+public class TestCompactorWithAbortCleanupUsingCompactionCycle extends TestCompactor {
Review Comment:
This test class will be removed once it's known that this feature is stable.
A separate task will be created for removing this use case.
##########
ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java:
##########
@@ -61,12 +60,10 @@ public void init(AtomicBoolean stop) throws Exception {
cleanerExecutor = CompactorUtil.createExecutorWithThreadFactory(
conf.getIntVar(HiveConf.ConfVars.HIVE_COMPACTOR_CLEANER_THREADS_NUM),
COMPACTOR_CLEANER_THREAD_NAME_FORMAT);
-    if (CollectionUtils.isEmpty(cleanupHandlers)) {
-      FSRemover fsRemover = new FSRemover(conf, ReplChangeManager.getInstance(conf), metadataCache);
-      cleanupHandlers = TaskHandlerFactory.getInstance()
-          .getHandlers(conf, txnHandler, metadataCache, metricsEnabled, fsRemover);
-    }
+    FSRemover fsRemover = new FSRemover(conf, ReplChangeManager.getInstance(conf), metadataCache);
+    cleanupHandlers = TaskHandlerFactory.getInstance()
+        .getHandlers(conf, txnHandler, metadataCache, metricsEnabled, fsRemover);
Review Comment:
Done
##########
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java:
##########
@@ -649,6 +649,10 @@ public enum ConfVars {
     COMPACTOR_CLEANER_TABLECACHE_ON("metastore.compactor.cleaner.tablecache.on",
         "hive.compactor.cleaner.tablecache.on", true,
         "Enable table caching in the cleaner. Currently the cache is cleaned after each cycle."),
+    COMPACTOR_CLEAN_ABORTS_USING_CLEANER("metastore.compactor.clean.aborts.using.cleaner",
+        "hive.compactor.clean.aborts.using.cleaner", true,
Review Comment:
The plan is to keep this config until we know that abort cleanup is stable. If
there are any issues with the new handler, we can fall back to the compaction
cycle. I will create a task once it's determined that this feature is stable,
and we can then remove the feature flag and its associated logic.
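To illustrate the fallback pattern described in this comment, here is a minimal, hypothetical sketch of a boolean feature flag routing abort cleanup either to the new dedicated handler or back to the legacy compaction cycle. All names below are illustrative stand-ins, not Hive's real APIs.

```java
// Hypothetical sketch of a feature-flag fallback; types are stand-ins,
// not Hive's real compactor classes.
public class FeatureFlagFallback {
    interface AbortCleanupStrategy { String name(); }

    static class DedicatedCleanerHandler implements AbortCleanupStrategy {
        public String name() { return "cleaner-handler"; }
    }

    static class CompactionCycleCleanup implements AbortCleanupStrategy {
        public String name() { return "compaction-cycle"; }
    }

    // Mirrors the intent of metastore.compactor.clean.aborts.using.cleaner:
    // true (the default) selects the new handler; false restores the old path.
    static AbortCleanupStrategy select(boolean cleanAbortsUsingCleaner) {
        return cleanAbortsUsingCleaner
                ? new DedicatedCleanerHandler()
                : new CompactionCycleCleanup();
    }

    public static void main(String[] args) {
        System.out.println(select(true).name());   // prints cleaner-handler
        System.out.println(select(false).name());  // prints compaction-cycle
    }
}
```

The design point is that the old code path stays intact behind the flag, so an operator can revert behavior with a config change rather than a rollback.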
##########
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java:
##########
@@ -162,31 +162,33 @@ public Set<CompactionInfo> findPotentialCompactions(int abortedThreshold,
}
rs.close();
-      // Check for aborted txns: number of aborted txns past threshold and age of aborted txns
-      // past time threshold
-      boolean checkAbortedTimeThreshold = abortedTimeThreshold >= 0;
-      String sCheckAborted = "SELECT \"TC_DATABASE\", \"TC_TABLE\", \"TC_PARTITION\", " +
-          "MIN(\"TXN_STARTED\"), COUNT(*) FROM \"TXNS\", \"TXN_COMPONENTS\" " +
-          " WHERE \"TXN_ID\" = \"TC_TXNID\" AND \"TXN_STATE\" = " + TxnStatus.ABORTED + " " +
-          "GROUP BY \"TC_DATABASE\", \"TC_TABLE\", \"TC_PARTITION\" " +
-          (checkAbortedTimeThreshold ? "" : " HAVING COUNT(*) > " + abortedThreshold);
-
-      LOG.debug("Going to execute query <{}>", sCheckAborted);
-      rs = stmt.executeQuery(sCheckAborted);
-      long systemTime = System.currentTimeMillis();
-      while (rs.next()) {
-        boolean pastTimeThreshold =
-            checkAbortedTimeThreshold && rs.getLong(4) + abortedTimeThreshold < systemTime;
-        int numAbortedTxns = rs.getInt(5);
-        if (numAbortedTxns > abortedThreshold || pastTimeThreshold) {
-          CompactionInfo info = new CompactionInfo();
-          info.dbname = rs.getString(1);
-          info.tableName = rs.getString(2);
-          info.partName = rs.getString(3);
-          info.tooManyAborts = numAbortedTxns > abortedThreshold;
-          info.hasOldAbort = pastTimeThreshold;
-          LOG.debug("Found potential compaction: {}", info);
-          response.add(info);
+      if (!MetastoreConf.getBoolVar(conf, ConfVars.COMPACTOR_CLEAN_ABORTS_USING_CLEANER)) {
Review Comment:
The plan is to keep this config until we know that abort cleanup is stable. If
there are any issues with the new handler, we can fall back to the compaction
cycle. I will create a task once it's determined that this feature is stable,
and we can then remove the feature flag.
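For context, the legacy path being gated above selects a table or partition for abort handling when either the count of aborted txns exceeds a threshold or the oldest aborted txn is past an age threshold. A self-contained sketch of that selection check, with hypothetical stand-in types rather than Hive's CompactionInfo/JDBC plumbing:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in illustrating only the threshold logic of the legacy
// abort check; not Hive's real CompactionTxnHandler code.
public class AbortThresholdCheck {
    record AbortedGroup(String db, String table, String partition,
                        long minTxnStartedMs, int abortedCount) {}

    // A group qualifies when the aborted-txn count exceeds abortedThreshold,
    // or (when the time threshold is enabled, i.e. >= 0) the oldest abort
    // started more than abortedTimeThresholdMs ago.
    static List<String> findPotentialAbortCompactions(
            List<AbortedGroup> groups, int abortedThreshold,
            long abortedTimeThresholdMs, long nowMs) {
        boolean checkTime = abortedTimeThresholdMs >= 0;
        List<String> result = new ArrayList<>();
        for (AbortedGroup g : groups) {
            boolean pastTime = checkTime
                    && g.minTxnStartedMs() + abortedTimeThresholdMs < nowMs;
            if (g.abortedCount() > abortedThreshold || pastTime) {
                result.add(g.db() + "." + g.table());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<AbortedGroup> groups = List.of(
                new AbortedGroup("db", "t1", null, 1_000L, 5),   // too many aborts
                new AbortedGroup("db", "t2", null, 1_000L, 1),   // old abort
                new AbortedGroup("db", "t3", null, 99_000L, 1)); // neither
        System.out.println(findPotentialAbortCompactions(
                groups, 3, 10_000L, 100_000L));  // prints [db.t1, db.t2]
    }
}
```

With the new config set to true this check is skipped entirely and the dedicated cleaner handler takes over abort cleanup.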
##########
ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/handler/AbortedTxnCleaner.java:
##########
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.txn.compactor.handler;
+
+import org.apache.hadoop.hive.common.ValidReaderWriteIdList;
+import org.apache.hadoop.hive.common.ValidTxnList;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.metrics.MetricsConstants;
+import org.apache.hadoop.hive.metastore.metrics.PerfLogger;
+import org.apache.hadoop.hive.metastore.txn.AcidTxnInfo;
+import org.apache.hadoop.hive.metastore.txn.TxnStore;
+import org.apache.hadoop.hive.metastore.txn.TxnUtils;
+import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
+import org.apache.hadoop.hive.ql.txn.compactor.CompactorUtil;
+import org.apache.hadoop.hive.ql.txn.compactor.CompactorUtil.ThrowingRunnable;
+import org.apache.hadoop.hive.ql.txn.compactor.FSRemover;
+import org.apache.hadoop.hive.ql.txn.compactor.MetadataCache;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static java.util.Objects.isNull;
+
+/**
+ * Abort-cleanup based implementation of TaskHandler.
+ * Provides implementation of creation of abort clean tasks.
+ */
+class AbortedTxnCleaner extends TaskHandler {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AbortedTxnCleaner.class.getName());
+
+ public AbortedTxnCleaner(HiveConf conf, TxnStore txnHandler,
+ MetadataCache metadataCache, boolean metricsEnabled,
+ FSRemover fsRemover) {
+ super(conf, txnHandler, metadataCache, metricsEnabled, fsRemover);
+ }
+
+  /**
+   The cleanup here is based on the following idea - <br>
+   1. Aborted cleanup is independent of compaction. This is because directories which are written by
+   aborted txns are not visible to any open txns. They are only visible while determining the AcidState (which
+   only sees the aborted deltas and does not read the files).<br><br>
+
+   The following algorithm is used to clean the set of aborted directories - <br>
+   a. Find the list of entries which are suitable for cleanup (this is done in {@link TxnStore#findReadyToCleanForAborts(long, int)}).<br>
+   b. If the table/partition does not exist, remove the associated aborted entry in the TXN_COMPONENTS table. <br>
+   c. Get the AcidState of the table by using the min open txn ID, database name, table name, partition name and highest write ID. <br>
+   d. Fetch the aborted directories and delete them. <br>
+   e. Fetch the aborted write IDs from the AcidState and use them to delete the associated metadata in the TXN_COMPONENTS table.
+   **/
+ @Override
+ public List<Runnable> getTasks() throws MetaException {
+ int abortedThreshold = HiveConf.getIntVar(conf,
+ HiveConf.ConfVars.HIVE_COMPACTOR_ABORTEDTXN_THRESHOLD);
+    long abortedTimeThreshold = HiveConf
+        .getTimeVar(conf, HiveConf.ConfVars.HIVE_COMPACTOR_ABORTEDTXN_TIME_THRESHOLD, TimeUnit.MILLISECONDS);
+    List<AcidTxnInfo> readyToCleanAborts = txnHandler.findReadyToCleanForAborts(abortedTimeThreshold, abortedThreshold);
+
+    if (!readyToCleanAborts.isEmpty()) {
+      return readyToCleanAborts.stream().map(ci -> ThrowingRunnable.unchecked(() ->
+              clean(ci, ci.txnId > 0 ? ci.txnId : Long.MAX_VALUE, metricsEnabled)))
+          .collect(Collectors.toList());
+    }
+    return Collections.emptyList();
+ }
+
+  private void clean(AcidTxnInfo info, long minOpenTxn, boolean metricsEnabled) throws MetaException {
+    LOG.info("Starting cleaning for {}", info);
+    PerfLogger perfLogger = PerfLogger.getPerfLogger(false);
+    String cleanerMetric = MetricsConstants.COMPACTION_CLEANER_CYCLE + "_";
+    try {
+      if (metricsEnabled) {
+        perfLogger.perfLogBegin(AbortedTxnCleaner.class.getName(), cleanerMetric);
+      }
+ Table t;
Review Comment:
Done
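The Javadoc steps (a)-(e) above can be sketched, very loosely, as follows. Every type and collection here is a hypothetical stand-in; this is not Hive's real TxnStore/AcidState API, just the shape of the control flow.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Loose, simplified walk-through of the abort-cleanup steps (a)-(e);
// all types are stand-ins, not Hive's real compactor classes.
public class AbortCleanupSketch {
    // Stand-in for one ready-to-clean abort entry.
    record AbortEntry(String table, List<String> abortedDirs) {}

    // Returns the directories that were "deleted"; mutates txnComponents to
    // simulate removing the associated TXN_COMPONENTS metadata.
    static List<String> clean(List<AbortEntry> readyToClean,
                              Set<String> existingTables,
                              Set<String> txnComponents) {
        List<String> deletedDirs = new ArrayList<>();
        // (a) entries suitable for cleanup were already selected upstream
        for (AbortEntry e : readyToClean) {
            if (!existingTables.contains(e.table())) {
                // (b) table/partition is gone: only drop the metadata row
                txnComponents.remove(e.table());
                continue;
            }
            // (c)+(d) resolve the aborted directories and delete them
            deletedDirs.addAll(e.abortedDirs());
            // (e) delete the associated TXN_COMPONENTS metadata
            txnComponents.remove(e.table());
        }
        return deletedDirs;
    }

    public static void main(String[] args) {
        Set<String> txnComponents = new HashSet<>(Set.of("t1", "t2"));
        List<String> deleted = clean(
                List.of(new AbortEntry("t1", List.of("/warehouse/t1/delta_5_5")),
                        new AbortEntry("t2", List.of())),  // t2 was dropped
                Set.of("t1"), txnComponents);
        System.out.println(deleted);        // directories removed for t1
        System.out.println(txnComponents);  // metadata rows gone for both
    }
}
```

The key property, as the Javadoc notes, is that this never races with readers: aborted deltas are invisible to open txns, so directory deletion and metadata removal can proceed independently of compaction.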
Issue Time Tracking
-------------------
Worklog Id: (was: 857248)
Time Spent: 12h 10m (was: 12h)
> Implement a separate handler to handle aborted transaction cleanup
> ------------------------------------------------------------------
>
> Key: HIVE-27020
> URL: https://issues.apache.org/jira/browse/HIVE-27020
> Project: Hive
> Issue Type: Sub-task
> Reporter: Sourabh Badhya
> Assignee: Sourabh Badhya
> Priority: Major
> Labels: pull-request-available
> Time Spent: 12h 10m
> Remaining Estimate: 0h
>
> As described in the parent task, once the cleaner is separated into different
> entities, implement a separate handler which can create requests for aborted
> transactions cleanup. This would move the aborted transaction cleanup
> exclusively to the cleaner.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)