[ https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978277#comment-16978277 ]
Karen Coppage commented on HIVE-21266:
--------------------------------------
I created a unit test and manually tested the streaming API. In both scenarios,
after compaction and cleaning, the aborted transaction is still marked as
aborted, so there is no issue.
I will contribute the unit test to prevent regression. Changing the issue name to:
Unit test for potential issue with single delta file
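
For reference, a minimal standalone sketch of the guard condition quoted below (this is not Hive's actual code; the method name and parameters are illustrative) shows why a single streaming delta would have been skipped by compaction:

{noformat}
public class CompactionGuardSketch {
    /**
     * Models the check in CompactorMR: compaction is skipped when the total
     * number of base, delta, and original files is at most one.
     */
    static boolean skipsCompaction(int deltaCount, boolean hasBase, int origCount) {
        return (deltaCount + (hasBase ? 1 : 0)) + origCount <= 1;
    }

    public static void main(String[] args) {
        // One streaming delta (e.g. delta_11_20), no base, no originals:
        // the guard skips compaction, so the delta is never rewritten.
        System.out.println(skipsCompaction(1, false, 0)); // true -> skipped

        // Two deltas: compaction proceeds and rewriting filters aborted txns.
        System.out.println(skipsCompaction(2, false, 0)); // false -> compacted
    }
}
{noformat}

The concern in the original report was that the "skipped" case still transitioned to cleaning; the unit test confirms the aborted transaction remains marked as aborted in that case.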
> Issue with single delta file
> ----------------------------
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 4.0.0
> Reporter: Eugene Koifman
> Assignee: Karen Coppage
> Priority: Major
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) {
>   LOG.debug("Not compacting {}; current base is {} and there are {} deltas and {} originals",
>       sd.getLocation(), dir.getBaseDirectory(), deltaCount, origCount);
>   return;
> }
> {noformat}
> This check is problematic.
> Suppose you have one delta file from streaming ingest, {{delta_11_20}}, in which
> {{txnid:13}} was aborted. The code above will not rewrite the delta (rewriting is
> what drops anything belonging to the aborted txn), yet the compaction still
> transitions to the "ready for cleaning" state, which drops the metadata about the
> aborted txn in {{markCleaned()}}. The aborted data will then come back as committed.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)