[
https://issues.apache.org/jira/browse/SPARK-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15127692#comment-15127692
]
Tao Li commented on SPARK-13129:
--------------------------------
To enable the Hive streaming data ingest feature, I changed the metastore setting
hive.txn.manager to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.
When I start the spark-sql command line and enter a simple SQL command, "use mydb;",
it throws the following exception:
FAILED: RuntimeException [Error 10264]: To use DbTxnManager you must set hive.support.concurrency=true
16/02/02 13:19:14 ERROR ql.Driver: FAILED: RuntimeException [Error 10264]: To use DbTxnManager you must set hive.support.concurrency=true
java.lang.RuntimeException: To use DbTxnManager you must set hive.support.concurrency=true
at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.setHiveConf(DbTxnManager.java:63)
at org.apache.hadoop.hive.ql.lockmgr.TxnManagerFactory.getTxnManager(TxnManagerFactory.java:72)
at org.apache.hadoop.hive.ql.session.SessionState.initTxnMgr(SessionState.java:395)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:405)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:429)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:418)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:256)
at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:211)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:248)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:418)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:408)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:557)
at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:69)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:140)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:138)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:933)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:933)
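The exception itself names the missing setting: DbTxnManager refuses to initialize unless
concurrency support is also turned on. A minimal hive-site.xml sketch pairing the two
properties; the compactor entries are an assumption based on the Hive transactions
documentation for this release line, not taken from this report:

```xml
<!-- hive-site.xml: properties needed for DbTxnManager (ACID) support.
     hive.txn.manager and hive.support.concurrency come from the error above;
     the compactor settings are assumed from the Hive transactions docs. -->
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
```

Note that Spark's embedded Hive client reads its own copy of hive-site.xml, so the
properties must be visible on the classpath of the spark-sql process as well as on
the metastore.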
> Spark SQL can't query a Hive table created by the Hive HCatalog Streaming API
> -----------------------------------------------------------------------------
>
> Key: SPARK-13129
> URL: https://issues.apache.org/jira/browse/SPARK-13129
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.6.0
> Environment: hadoop version: 2.5.0-cdh5.3.2
> hive version: 0.13.1
> spark version: 1.6.0
> Reporter: Tao Li
> Labels: hive, orc, sparksql
>
> I created a Hive table using the Hive HCatalog Streaming API:
> https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest
> The table's data is streamed in by the Flume Hive sink, and I can query the
> table from the Hive command line.
> But I can't query the table from the spark-sql command line. Is this a Spark
> SQL bug or an unimplemented feature?
> The Hive storage format is ORC with ACID support.
> http://orc.apache.org/docs/acid.html
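For context, the Streaming Data Ingest page linked above requires the target table to
be bucketed and ORC-backed, and in later Hive releases also marked transactional. A
sketch of such a DDL; the database, table, and column names are illustrative, not from
this report:

```sql
-- Illustrative DDL for a table the HCatalog Streaming API can write to:
-- bucketed, stored as ORC, and (in Hive 0.14+) flagged transactional.
CREATE TABLE mydb.web_logs (
  ts  STRING,
  msg STRING
)
CLUSTERED BY (ts) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');
```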
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)