[ https://issues.apache.org/jira/browse/HIVE-18530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340578#comment-16340578 ]
Hive QA commented on HIVE-18530:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907590/HIVE-18530.1.patch
{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11661 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=162)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=186)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8842/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8842/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8842/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12907590 - PreCommit-HIVE-Build
> Replication should skip MM table (for now)
> ------------------------------------------
>
> Key: HIVE-18530
> URL: https://issues.apache.org/jira/browse/HIVE-18530
> Project: Hive
> Issue Type: Bug
> Components: repl
> Reporter: Daniel Dai
> Assignee: Daniel Dai
> Priority: Major
> Attachments: HIVE-18530.1.patch
>
>
> Currently replication cannot handle transactional tables (including MM tables)
> until HIVE-18320. HIVE-17504 explicitly skips tables with transactional=true.
> HIVE-18352 changed that logic to use AcidUtils.isAcidTable for the same
> purpose. However, isAcidTable returns false for MM tables, so Hive still
> dumps MM tables during replication.
> Here is an error message seen while dumping an MM table:
> {code}
> ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. java.io.FileNotFoundException: Path is not a file: /apps/hive/warehouse/testrepldb5.db/test1/delta_0000261_0000261_0000
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:89)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)
> at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:154)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1920)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:731)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:424)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> INFO : Completed executing command(queryId=hive_20180119203438_293813df-7630-47fa-bc30-5ef7cbb42842); Time taken: 1.119 seconds
> Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. java.io.FileNotFoundException: Path is not a file: /apps/hive/warehouse/testrepldb5.db/test1/delta_0000261_0000261_0000
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:89)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)
> at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:154)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1920)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:731)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:424)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) (state=08S01,code=1)
> 0: jdbc:hive2://ctr-e137-1514896590304-25219-> Closing: 0: jdbc:hive2://ctr-e137-1514896590304-25219-02-000005.hwx.site:2181,ctr-e137-1514896590304-25219-02-000012.hwx.site:2181,ctr-e137-1514896590304-25219-02-000009.hwx.site:2181,ctr-e137-1514896590304-25219-02-000004.hwx.site:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/[email protected]
> {code}
> We should switch to using AcidUtils.isTransactionalTable.
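> A minimal sketch of the intended change, assuming the repl dump path gates tables
> through a single skip check; the class and method names below are illustrative,
> not the actual patch:
> {code}
> import org.apache.hadoop.hive.ql.io.AcidUtils;
> import org.apache.hadoop.hive.ql.metadata.Table;
>
> // Hypothetical helper showing only the predicate swap; the real call site
> // lives in the repl dump path.
> public class ReplDumpSkipCheckSketch {
>
>   // Old check (after HIVE-18352): matches only full ACID tables, so MM
>   // (insert-only transactional) tables were still dumped and failed.
>   static boolean shouldSkipOld(Table table) {
>     return AcidUtils.isAcidTable(table);
>   }
>
>   // Intended check: matches any transactional table, covering both full ACID
>   // and MM tables, until HIVE-18320 adds real replication support for them.
>   static boolean shouldSkipNew(Table table) {
>     return AcidUtils.isTransactionalTable(table);
>   }
> }
> {code}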
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)