[jira] [Commented] (LIVY-638) get sql.AnalysisException when create table using thriftserver

2019-08-15 Thread mingchao zhao (JIRA)


[ 
https://issues.apache.org/jira/browse/LIVY-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908645#comment-16908645
 ] 

mingchao zhao commented on LIVY-638:


In livy.conf there is a configuration item, livy.repl.enable-hive-context, 
which defaults to false. In InteractiveSession.scala it is used as follows:
{code:java}
val confVal = if (enableHiveContext) "hive" else "in-memory"
builderProperties.put("spark.sql.catalogImplementation", confVal)
{code}
So just set livy.repl.enable-hive-context to true
to get spark.sql.catalogImplementation = hive in the session.
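
For reference, the corresponding entry in livy.conf would look like the sketch 
below (the property name is taken from the comment above; the comment text is 
illustrative). Note that this only takes effect if the underlying Spark build 
includes Hive support:
{code}
# livy.conf — have Livy sessions start Spark with a Hive catalog,
# i.e. spark.sql.catalogImplementation=hive instead of in-memory
livy.repl.enable-hive-context = true
{code}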

> get sql.AnalysisException when create table using thriftserver
> --
>
> Key: LIVY-638
> URL: https://issues.apache.org/jira/browse/LIVY-638
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: mingchao zhao
>Priority: Major
> Attachments: create table.png
>
>
> An org.apache.spark.sql.AnalysisException occurs when I use the thriftserver to 
> execute the following SQL. When I do not use Hive as the metastore, does the 
> thriftserver not support CREATE TABLE?
> 0: jdbc:hive2://localhost:10090> CREATE TABLE test(key INT, val STRING);
>  Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.spark.sql.AnalysisException: Hive support is required to CREATE 
> Hive TABLE (AS SELECT);;
>  'CreateTable `test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
> ErrorIfExists
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:392)
>  
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:390)
>  org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:117)
>  
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:390)
>  
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:388)
>  
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:386)
>  
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:386)
>  
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:386)
>  
> org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
>  
> org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:108)
>  
> org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
>  
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
>  
> org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
>  
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
>  
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
>  
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
>  org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
>  org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
>  org.apache.livy.thriftserver.session.SqlJob.executeSql(SqlJob.java:74)
>  org.apache.livy.thriftserver.session.SqlJob.call(SqlJob.java:64)
>  org.apache.livy.thriftserver.session.SqlJob.call(SqlJob.java:35)
>  org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:64)
>  org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:31)
>  java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748) (state=,code=0)
>  0: jdbc:hive2://localhost:10090>
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (LIVY-638) get sql.AnalysisException when create table using thriftserver

2019-08-14 Thread Yiheng Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/LIVY-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907028#comment-16907028
 ] 

Yiheng Wang commented on LIVY-638:
--

It's weird that even when I enable Hive support in the Spark configuration, the 
Livy thrift server still throws this exception... 



