[
https://issues.apache.org/jira/browse/HIVE-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064789#comment-15064789
]
Hive QA commented on HIVE-12708:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12778540/HIVE-12708.1-spark.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9866 tests executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}
Test results:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/1022/testReport
Console output:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/1022/console
Test logs:
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-1022/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12778540 - PreCommit-HIVE-SPARK-Build
> Hive on Spark doesn't work with Kerberized HBase [Spark Branch]
> ---------------------------------------------------------------
>
> Key: HIVE-12708
> URL: https://issues.apache.org/jira/browse/HIVE-12708
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Affects Versions: 1.2.0, 1.1.0, 2.0.0
> Reporter: Xuefu Zhang
> Assignee: Xuefu Zhang
> Attachments: HIVE-12708.1-spark.patch
>
>
> The Spark application launcher (spark-submit) acquires an HBase delegation token on
> the Hive user's behalf when the application is launched. This mechanism does not
> work for long-running sessions, and it is not in line with Hive's approach: Hive
> acquires the token automatically whenever a job needs it. The right approach on
> the Spark side would be to allow applications to dynamically add whatever tokens
> they need to the Spark context. While that requires work in Spark, we provide a
> workaround in Hive.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)