[
https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16461089#comment-16461089
]
Hive QA commented on HIVE-18988:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12921540/HIVE-18988.01-branch-3.patch
{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 58 failed/errored test(s), 14143 tests executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parallel_join0] (batchId=76)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_stats] (batchId=159)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part] (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators] (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets] (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[parallel_orderby] (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[avro_non_nullable_union] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[cachingprintstream] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[compute_stats_long] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part3] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part_max_per_node] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dynamic_partitions_with_whitelist] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe2] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe3] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_error] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[serde_regex2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_corr_multi_rows] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_multi_rows] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error_reduce] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[local_mapred_error_cache] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=225)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10632/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10632/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10632/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 58 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12921540 - PreCommit-HIVE-Build
> Support bootstrap replication of ACID tables
> --------------------------------------------
>
> Key: HIVE-18988
> URL: https://issues.apache.org/jira/browse/HIVE-18988
> Project: Hive
> Issue Type: Sub-task
> Components: HiveServer2, repl
> Affects Versions: 3.0.0
> Reporter: Sankar Hariappan
> Assignee: Sankar Hariappan
> Priority: Major
> Labels: ACID, DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18988.01-branch-3.patch, HIVE-18988.01.patch,
> HIVE-18988.02.patch, HIVE-18988.03.patch, HIVE-18988.04.patch,
> HIVE-18988.05.patch, HIVE-18988.06.patch, HIVE-18988.07.patch
>
>
> Bootstrapping of ACID tables needs special handling to replicate a stable
> state of the data.
> - If the ACID feature is enabled, then perform the bootstrap dump for ACID
> tables within a read txn (a minimal sketch follows this description):
> -> Dump the table/partition metadata.
> -> Get the list of valid data files for a table using the same logic a read
> txn does.
> -> Dump the latest ValidWriteIdList as per the current read txn.
> - Set the last replication state such that it doesn't miss any open txn
> started after the bootstrap dump was triggered.
> - For txns that were already open before the bootstrap dump was triggered, it
> is not guaranteed that an open_txn event was captured for them. Also, if such
> txns belong to a streaming ingest, the dumped ACID table data may include
> data of open txns, which would break snapshot isolation at the target. To
> avoid that, the bootstrap dump should wait for a timeout (new configuration:
> hive.repl.bootstrap.dump.open.txn.timeout) and, once it expires, force-abort
> those txns and continue (see the second sketch below).
> - If any force-aborted txn belongs to a streaming ingest, the dumped ACID
> table data may contain aborted data too. So it is necessary to replicate the
> aborted write ids to the target to mark that data invalid for any readers
> (see the third sketch below).
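To make the first bullet concrete, here is a minimal Java sketch of dumping one ACID table inside a read txn. ReplDumpClient and all of its methods are hypothetical names for illustration, not Hive's actual API; only ValidWriteIdList echoes the class named above, and it is carried here as an opaque string.
{code:java}
import java.util.List;

// Hypothetical client interface; Hive's real metastore/txn APIs differ.
interface ReplDumpClient {
    long openReadTxn();                                    // read-only txn for the dump
    void dumpMetadata(String table);                       // table/partition metadata
    String getValidWriteIdList(long txnId, String table);  // committed-write-id snapshot
    List<String> listValidFiles(String table, String validWriteIds);
    void commitTxn(long txnId);
}

public class BootstrapAcidDump {
    public static void dumpTable(ReplDumpClient client, String table) {
        long txnId = client.openReadTxn();
        try {
            // 1. Dump table/partition metadata under the read txn.
            client.dumpMetadata(table);
            // 2. Snapshot the write ids exactly as a reader would.
            String writeIds = client.getValidWriteIdList(txnId, table);
            // 3. Copy only the data files visible under that snapshot,
            //    skipping deltas of open or aborted write ids.
            for (String file : client.listValidFiles(table, writeIds)) {
                copyToDumpDir(file);
            }
            // 4. Record the ValidWriteIdList so the target can rebuild
            //    the same snapshot on load.
            recordWriteIdList(table, writeIds);
        } finally {
            client.commitTxn(txnId);
        }
    }

    private static void copyToDumpDir(String file) { /* copy the data file */ }
    private static void recordWriteIdList(String table, String ids) { /* persist with the dump */ }
}
{code}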
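The open-txn timeout in the third bullet could look roughly like this second sketch. TxnAdmin and its two methods are assumptions for illustration; hive.repl.bootstrap.dump.open.txn.timeout is the new configuration named in the description.
{code:java}
import java.util.List;

// Hypothetical admin hook; Hive's real txn-management API differs.
interface TxnAdmin {
    List<Long> getOpenTxnsOlderThan(long timestampMs); // txns opened before the dump started
    void abortTxns(List<Long> txnIds);                 // force-abort on the metastore
}

public class OpenTxnBarrier {
    /**
     * Waits for txns that were opened before the bootstrap dump to finish.
     * Once the timeout (hive.repl.bootstrap.dump.open.txn.timeout) expires,
     * force-aborts whatever is still open so the dump sees a stable snapshot.
     */
    public static void awaitOrAbort(TxnAdmin admin, long dumpStartMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        List<Long> open = admin.getOpenTxnsOlderThan(dumpStartMs);
        while (!open.isEmpty() && System.currentTimeMillis() < deadline) {
            Thread.sleep(1000);  // poll once a second
            open = admin.getOpenTxnsOlderThan(dumpStartMs);
        }
        if (!open.isEmpty()) {
            admin.abortTxns(open);  // timed out: abort and let the dump continue
        }
    }
}
{code}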
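Finally, the last bullet, replaying the aborted write ids on the target, reduces to a third sketch along these lines; ReplLoadClient is again an illustrative name, not Hive's actual replication API. The point is only that the target registers those write ids as aborted before readers see the loaded data.
{code:java}
import java.util.List;

// Illustrative target-side hook, not Hive's actual replication API.
interface ReplLoadClient {
    void markWriteIdsAborted(String table, List<Long> abortedWriteIds);
}

public class AbortedWriteIdReplay {
    /**
     * Streaming-ingest txns force-aborted during the dump may already have
     * written delta files that were copied along with the table. Registering
     * their write ids as aborted on the target keeps readers (and compaction)
     * from treating that data as committed.
     */
    public static void replay(ReplLoadClient target, String table, List<Long> aborted) {
        if (!aborted.isEmpty()) {
            target.markWriteIdsAborted(table, aborted);
        }
    }
}
{code}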
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)