[
https://issues.apache.org/jira/browse/HIVE-24481?focusedWorklogId=522710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-522710
]
ASF GitHub Bot logged work on HIVE-24481:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 10/Dec/20 13:38
Start Date: 10/Dec/20 13:38
Worklog Time Spent: 10m
Work Description: deniskuzZ commented on a change in pull request #1738:
URL: https://github.com/apache/hive/pull/1738#discussion_r540174463
##########
File path:
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java
##########
@@ -1177,6 +1177,104 @@ private HiveStreamingConnection prepareTableTwoPartitionsAndConnection(String db
.connect();
}
+ /**
+ * There is a special case handled in the compaction Worker that skips compaction
+ * if there is only one valid delta. But such a compaction request is still cleaned up, if there are aborted directories.
+ * @see Worker.isEnoughToCompact
+ * However, if no compaction was done, deltas containing mixed aborted / committed writes from streaming cannot be cleaned,
+ * and the metadata belonging to those aborted transactions cannot be removed.
+ * @throws Exception ex
+ */
+ @Test
+ public void testSkippedCompactionCleanerKeepsAborted() throws Exception {
+ String dbName = "default";
+ String tblName = "cws";
+
+ String agentInfo = "UT_" + Thread.currentThread().getName();
+ TxnStore txnHandler = TxnUtils.getTxnStore(conf);
+
+ executeStatementOnDriver("drop table if exists " + tblName, driver);
+ executeStatementOnDriver("CREATE TABLE " + tblName + "(b STRING) " +
+ " PARTITIONED BY (a INT) STORED AS ORC TBLPROPERTIES
('transactional'='true')", driver);
+ executeStatementOnDriver("alter table " + tblName + " add partition(a=1)",
driver);
+
+ StrictDelimitedInputWriter writer = StrictDelimitedInputWriter.newBuilder()
+ .withFieldDelimiter(',')
+ .build();
+
+ // Create initial aborted txn
+ HiveStreamingConnection connection = HiveStreamingConnection.newBuilder()
+ .withDatabase(dbName)
+ .withTable(tblName)
+ .withStaticPartitionValues(Collections.singletonList("1"))
+ .withAgentInfo(agentInfo)
+ .withHiveConf(conf)
+ .withRecordWriter(writer)
+ .withStreamingOptimizations(true)
+ .withTransactionBatchSize(1)
+ .connect();
+
+ connection.beginTransaction();
+ connection.write("3,1".getBytes());
+ connection.write("4,1".getBytes());
+ connection.abortTransaction();
+
+ connection.close();
+
+ // Create a sequence of commit, abort, commit to the same delta folder
+ connection = HiveStreamingConnection.newBuilder()
+ .withDatabase(dbName)
+ .withTable(tblName)
+ .withStaticPartitionValues(Collections.singletonList("1"))
+ .withAgentInfo(agentInfo)
+ .withHiveConf(conf)
+ .withRecordWriter(writer)
+ .withStreamingOptimizations(true)
+ .withTransactionBatchSize(3)
+ .connect();
+
+ connection.beginTransaction();
+ connection.write("1,1".getBytes());
+ connection.write("2,1".getBytes());
+ connection.commitTransaction();
+
+ connection.beginTransaction();
+ connection.write("3,1".getBytes());
+ connection.write("4,1".getBytes());
+ connection.abortTransaction();
+
+ connection.beginTransaction();
+ connection.write("5,1".getBytes());
+ connection.write("6,1".getBytes());
+ connection.commitTransaction();
+
+ connection.close();
+
+ // Check that aborted records are not read back
+ driver.run("select * from cws");
+ List<String> res = new ArrayList<>();
+ driver.getFetchTask().fetch(res);
+ Assert.assertEquals(4, res.size());
+
+ int count = TxnDbUtil.countQueryAgent(conf, "select count(*) from TXN_COMPONENTS");
+ Assert.assertEquals("There should be 2 record for two aborted
transaction", 2, count);
+
+ // Start a compaction that will be skipped, because there is only one valid delta
+ driver.run("alter table cws partition(a='1') compact 'minor'");
+ runWorker(conf);
+ // Cleaner should not delete info about aborted txn 2
+ runCleaner(conf);
+ txnHandler.cleanEmptyAbortedAndCommittedTxns();
+ count = TxnDbUtil.countQueryAgent(conf, "select count(*) from TXN_COMPONENTS");
+ Assert.assertEquals("There should be 1 record for the two aborted transactions", 1, count);
Review comment:
there should be a single record, for the 2nd aborted txn
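A minimal sketch of what the comment suggests for the assertion message (the exact wording is an assumption, not the committed patch):

    // hypothetical rewording: the one remaining TXN_COMPONENTS row belongs to the 2nd aborted txn
    Assert.assertEquals("There should be a single record for the 2nd aborted txn", 1, count);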
Issue Time Tracking
-------------------
Worklog Id: (was: 522710)
Time Spent: 1h (was: 50m)
> Skipped compaction can cause data corruption with streaming
> -----------------------------------------------------------
>
> Key: HIVE-24481
> URL: https://issues.apache.org/jira/browse/HIVE-24481
> Project: Hive
> Issue Type: Bug
> Reporter: Peter Varga
> Assignee: Peter Varga
> Priority: Major
> Labels: Compaction, pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Timeline:
> 1. create a partitioned table, add one static partition
> 2. transaction 1 writes delta_1, and aborts
> 3. create a streaming connection with transaction batch size 3, using withStaticPartitionValues with the existing partition
> 4. beginTransaction, write, commitTransaction
> 5. beginTransaction, write, abortTransaction
> 6. beginTransaction, write, commitTransaction
> 7. close the connection; the count of the table is 2
> 8. run a manual minor compaction on the partition. It skips the compaction, because the delta count = 1, but still cleans, because of the aborted txn 1 (see the sketch after this timeline)
> 9. the Cleaner will remove both aborted records from TXN_COMPONENTS
> 10. wait for the AcidHouseKeeperService to remove the empty aborted txns
> 11. select * from the table returns *3* records, reading back the aborted record
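>
> An illustrative, self-contained sketch of the skip decision from step 8 (the method shape and names below are assumptions for illustration; the real check lives in Worker.isEnoughToCompact):
>
>     // Illustrative sketch only, not Hive's actual implementation.
>     // Minor compaction is pointless with a single valid delta: merging one
>     // delta with itself rewrites the same data. The bug described above is
>     // that the Cleaner still ran afterwards and dropped the aborted-txn
>     // metadata, even though the mixed aborted/committed delta written by
>     // the streaming batch was never rewritten.
>     static boolean isEnoughToCompact(int validDeltaCount) {
>       return validDeltaCount > 1;
>     }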
--