[
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414952#comment-16414952
]
Hive QA commented on HIVE-18885:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12916288/HIVE-18885.04-branch-2.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 10667 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=227)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_tableproperty_optimize] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explaindenpendencydiffengs] (batchId=38)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=142)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=139)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[table_nonprintable] (batchId=140)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_acid_non_acid] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_parquet_types] (batchId=155)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[merge_negative_5] (batchId=88)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[explaindenpendencydiffengs] (batchId=115)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=117)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_ptf] (batchId=125)
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=176)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9858/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9858/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9858/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12916288 - PreCommit-HIVE-Build
> DbNotificationListener has a deadlock between Java and DB locks (2.x line)
> --------------------------------------------------------------------------
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
> Issue Type: Bug
> Components: Hive, Metastore
> Affects Versions: 2.3.2
> Reporter: Alexander Kolbasov
> Assignee: Vihang Karajgaonkar
> Priority: Major
> Attachments: HIVE-18885.01.branch-2.patch,
> HIVE-18885.02.branch-2.patch, HIVE-18885.03-branch-2.patch,
> HIVE-18885.04-branch-2.patch
>
>
> You can see the problem just from looking at the code, but it has also caused
> severe problems for real-life Hive users.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
> msdb.openTransaction()
> ...
> List<Partition> parts = msdb.getPartitions(dbname, name, -1);
> for (Partition part : parts) {
>   List<FieldSchema> oldCols = part.getSd().getCols();
>   part.getSd().setCols(newt.getSd().getCols());
>   String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>   updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>   msdb.alterPartition(dbname, name, part.getValues(), part);
> }
> {code}
> So it walks all partitions (and this may be a huge list) and does some
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition,
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result,
> while this is happening no other write DDL can proceed. This can sometimes
> cause DB lock timeouts, which cause HMS-level operation retries, which make
> things even worse. In one particular case this pretty much made HMS unusable.
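> To make the locking interaction concrete, here is a minimal sketch of the
> contention pattern (a hypothetical JDBC helper, not the actual
> DbNotificationListener code, with simplified column names): each per-partition
> event increments the shared NOTIFICATION_SEQUENCE row inside the caller's
> still-open transaction, so the row lock taken by the {{SELECT ... FOR UPDATE}}
> is held until the whole cascade commits.
> {code:java}
> import java.sql.Connection;
> import java.sql.PreparedStatement;
> import java.sql.ResultSet;
> import java.sql.SQLException;
>
> // Hypothetical helper illustrating the pattern; not Metastore code.
> public class NotificationSequenceSketch {
>
>   // Adds one notification event; assumes the caller owns an open transaction
>   // with auto-commit disabled.
>   static void addEvent(Connection conn, String message) throws SQLException {
>     long nextId;
>     // Row lock on the single NOTIFICATION_SEQUENCE row; every other writer
>     // blocks here until the enclosing (possibly huge) transaction ends.
>     try (PreparedStatement select = conn.prepareStatement(
>             "SELECT NEXT_EVENT_ID FROM NOTIFICATION_SEQUENCE FOR UPDATE");
>          ResultSet rs = select.executeQuery()) {
>       rs.next();
>       nextId = rs.getLong(1);
>     }
>     try (PreparedStatement update = conn.prepareStatement(
>             "UPDATE NOTIFICATION_SEQUENCE SET NEXT_EVENT_ID = ?")) {
>       update.setLong(1, nextId + 1);
>       update.executeUpdate();
>     }
>     try (PreparedStatement insert = conn.prepareStatement(
>             "INSERT INTO NOTIFICATION_LOG (EVENT_ID, MESSAGE) VALUES (?, ?)")) {
>       insert.setLong(1, nextId);
>       insert.setString(2, message);
>       insert.executeUpdate();
>     }
>     // No commit here: the lock lives as long as the caller's uber-transaction,
>     // i.e. for the entire alter-table-cascade loop shown above.
>   }
> }
> {code}
> Because every write DDL has to pass through the same sequence-row lock, a
> single long cascade effectively serializes all of them behind one transaction.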
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)