[ https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445799#comment-16445799 ]
Hive QA commented on HIVE-17657:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919865/HIVE-17657.01.patch
{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 70 failed/errored test(s), 14282 tests executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
	[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_blobstore] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_local] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_warehouse] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_local_to_blobstore] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore_nonpart] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_local] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse_nonpart] (batchId=256)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_local_to_blobstore] (batchId=256)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_00_nonpart_empty] (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_01_nonpart] (batchId=56)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_02_part] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_03_nonpart_over_compat] (batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_04_all_part] (batchId=29)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_04_evolved_parts] (batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_05_some_part] (batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_06_one_part] (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_07_all_part_over_nonoverlap] (batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_08_nonpart_rename] (batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_09_part_spec_nonoverlap] (batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_10_external_managed] (batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_12_external_location] (batchId=56)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_13_managed_location] (batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_14_managed_location_over_existing] (batchId=56)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_16_part_external] (batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_17_part_managed] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_19_00_part_external_location] (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_19_part_external_location] (batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_20_part_managed_location] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_22_import_exist_authsuccess] (batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_23_import_part_authsuccess] (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_24_import_nonexist_authsuccess] (batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_hidden_files] (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_2_exim_basic] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] (batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[import_exported_table] (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=225)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.TestTxnExIm.testImport (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testImportNoTarget (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testImportPartitioned (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testImportPartitionedCreate (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testImportPartitionedCreate2 (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testImportVectorized (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testMM (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testMMCreate (batchId=286)
org.apache.hadoop.hive.ql.TestTxnExIm.testMMFlatSource (batchId=286)
org.apache.hadoop.hive.ql.parse.TestExportImport.databaseTheTableIsImportedIntoShouldBeParsedFromCommandLine (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235)
org.apache.hive.hcatalog.api.repl.commands.TestCommands.testNoopReplEximCommands (batchId=193)
org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative (batchId=254)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10368/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10368/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10368/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 70 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12919865 - PreCommit-HIVE-Build
> export/import for MM tables is broken
> -------------------------------------
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Reporter: Eugene Koifman
> Assignee: Sergey Shelukhin
> Priority: Major
> Labels: mm-gap-2
> Attachments: HIVE-17657.01.patch, HIVE-17657.patch
>
>
> There is mm_exim.q, but it's not clear from the tests what file structure it creates.
> On import, the txn ids in the directory names would have to be remapped if importing to a different cluster. Perhaps export can be smart and export the highest base_x and accretive deltas (minus aborted ones). Then import can ...? It would have to remap txn ids from the archive to new txn ids. This would then mean that import is made up of several transactions rather than 1 atomic op (all locks must belong to a transaction).
> One possibility is to open a new txn for each dir in the archive (where the start/end txn of the file name is the same) and commit all of them at once (this needs a new TMgr API). This assumes using a shared lock (if any!) and thus allows other inserts (not related to the import) to occur.
> What if you have delta_6_9, e.g. as a result of concatenate? If we stipulate that this must mean there is no delta_6_6 or any other "obsolete" delta in the archive, we can map it to a new single-txn delta_x_x.
> Add a read_only mode for tables (useful in general, and may be needed for upgrade etc.) and use that to make the above atomic.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)