[ https://issues.apache.org/jira/browse/HIVE-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092933#comment-15092933 ]

Eugene Koifman commented on HIVE-12352:
---------------------------------------

Yes, you are right, the HWM should apply to all tables.

wrt 2nd comment:
Compactions find the smallest open txn in the working set and make sure to only
compact up to that txn (exclusive), so any files that include txn ids between
minOpenTxn and ValidCompactorTxnList.highWatermark are ignored.  That bound is
also the HWM referred to in this bug.  (Longer term ValidCompactorTxnList can be
refactored w/o minOpenTxn, but in the short run this is the simplest way to pass
the compaction HWM to the Worker.)
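
To make that concrete, a minimal sketch of the HWM computation (illustrative
names only, not the actual Hive API; it simply mirrors the pseudocode at the
end of the description below):

  public final class CompactorHwmSketch {
    /**
     * highWaterMark = highest allocated txn id in the snapshot;
     * minOpenTxn = smallest open txn id, or -1 if none are open.
     */
    static long compactionHighWaterMark(long highWaterMark, long minOpenTxn) {
      // If any txn is still open, the compactor must not look past it.
      if (minOpenTxn > 0) {
        return Math.min(highWaterMark, minOpenTxn);
      }
      return highWaterMark;
    }

    public static void main(String[] args) {
      // txns up to 100 allocated, txn 42 still open -> compact only up to 42
      System.out.println(compactionHighWaterMark(100L, 42L));   // 42
      // no open txns -> the full snapshot HWM applies
      System.out.println(compactionHighWaterMark(100L, -1L));   // 100
    }
  }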



> CompactionTxnHandler.markCleaned() may delete too much
> ------------------------------------------------------
>
>                 Key: HIVE-12352
>                 URL: https://issues.apache.org/jira/browse/HIVE-12352
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>            Priority: Blocker
>         Attachments: HIVE-12352.patch
>
>
>    Worker will start with the DB in state X (wrt this partition).
>    While it's working, more txns will happen against the partition it's compacting.
>    Then this will delete state up to X and everything since then.  There may be
>    new delta files created between compaction starting and cleaning.  These will
>    not be compacted until more transactions happen.  So this ideally should only
>    delete up to the TXN_ID that was compacted (i.e. HWM in Worker?).  Then this
>    can also run at READ_COMMITTED.  So this means we'd want to store the HWM in
>    COMPACTION_QUEUE when Worker picks up the job.
> Actually the problem is even worse (but also solved using the HWM as above):
> Suppose some transactions (against the same partition) have started and aborted
> since the time the Worker ran the compaction job.
> That means there are never-compacted delta files with data that belongs to
> these aborted txns.
> The following will pick up these aborted txns:
> s = "select txn_id from TXNS, TXN_COMPONENTS where txn_id = tc_txnid and txn_state = '" +
>     TXN_ABORTED + "' and tc_database = '" + info.dbname + "' and tc_table = '" +
>     info.tableName + "'";
> if (info.partName != null) s += " and tc_partition = '" + info.partName + "'";
> The logic after that will delete the relevant data from TXN_COMPONENTS, and if
> one of these txns becomes empty, it will be picked up by cleanEmptyAbortedTxns().
> At that point any metadata about an aborted txn is gone and the system will
> think it's committed.
> The HWM in this case would be (in ValidCompactorTxnList):
> if (minOpenTxn > 0)
>     min(highWaterMark, minOpenTxn)
> else
>     highWaterMark
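
For illustration only (a sketch, not the actual HIVE-12352 patch): the same
aborted-txn lookup as in the description, but bounded by a compaction HWM
recorded when the Worker picked up the job; the compactionHwm parameter and the
extra txn_id predicate are assumptions for the example.

  public final class MarkCleanedSketch {
    // The real constant lives in TxnHandler; the value here is for the example only.
    private static final char TXN_ABORTED = 'a';

    static String abortedTxnQuery(String dbName, String tableName,
                                  String partName, long compactionHwm) {
      StringBuilder s = new StringBuilder(
          "select txn_id from TXNS, TXN_COMPONENTS where txn_id = tc_txnid" +
          " and txn_state = '" + TXN_ABORTED + "'" +
          " and tc_database = '" + dbName + "'" +
          " and tc_table = '" + tableName + "'");
      if (partName != null) {
        s.append(" and tc_partition = '").append(partName).append("'");
      }
      // Assumed extra predicate: leave aborted txns above the stored HWM alone,
      // so their metadata survives until a later compaction covers them.
      s.append(" and txn_id <= ").append(compactionHwm);
      return s.toString();
    }
  }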


