[ https://issues.apache.org/jira/browse/SPARK-22670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-22670:
---------------------------------
    Target Version/s:   (was: 2.1.1)

> Not able to create table in Hive with SparkSession when JavaSparkContext is already initialized.
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-22670
>                 URL: https://issues.apache.org/jira/browse/SPARK-22670
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: Naresh Meena
>            Priority: Blocker
>             Fix For: 2.1.1
>
>
> Not able to create a table in Hive with SparkSession when a SparkContext is already initialized.
> Below are the code snippet and error logs.
> JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
> SparkSession hiveCtx = SparkSession
>         .builder()
>         .config(HiveConf.ConfVars.METASTOREURIS.toString(), "..:9083")
>         .config("spark.sql.warehouse.dir", "/apps/hive/warehouse")
>         .enableHiveSupport()
>         .getOrCreate();
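>
> A minimal sketch of one possible workaround, assuming the cause is that getOrCreate() reuses the already-running SparkContext so that enableHiveSupport() (which requests spark.sql.catalogImplementation=hive) never takes effect; the class name and ordering below are illustrative, and the metastore URI is left elided as in the snippet above: build the Hive-enabled SparkSession first and derive the JavaSparkContext from its underlying context.
>
> import org.apache.hadoop.hive.conf.HiveConf;
> import org.apache.spark.api.java.JavaSparkContext;
> import org.apache.spark.sql.SparkSession;
>
> public class HiveSessionFirst {   // hypothetical driver class, for illustration only
>     public static void main(String[] args) {
>         // Create the Hive-enabled SparkSession before any SparkContext exists,
>         // so the Hive catalog can be selected when the context is created.
>         SparkSession hiveCtx = SparkSession
>                 .builder()
>                 .config(HiveConf.ConfVars.METASTOREURIS.toString(), "..:9083")   // URI elided as in the report
>                 .config("spark.sql.warehouse.dir", "/apps/hive/warehouse")
>                 .enableHiveSupport()
>                 .getOrCreate();
>
>         // Reuse the session's SparkContext instead of constructing a second one.
>         JavaSparkContext javaSparkContext = new JavaSparkContext(hiveCtx.sparkContext());
>     }
> }
>
> The reported error log follows: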
> 2017-11-29 13:11:33 Driver [ERROR] SparkBatchSubmitter - Failed to start the driver for Batch_JDBC_PipelineTest
> org.apache.spark.sql.AnalysisException: Hive support is required to insert into the following tables:
> `default`.`testhivedata`
>                ;;
> 'InsertIntoTable 'SimpleCatalogRelation default, CatalogTable(
>       Table: `default`.`testhivedata`
>       Created: Wed Nov 29 13:11:33 IST 2017
>       Last Access: Thu Jan 01 05:29:59 IST 1970
>       Type: MANAGED
>       Schema: [StructField(empID,LongType,true), StructField(empDate,DateType,true), StructField(empName,StringType,true), StructField(empSalary,DoubleType,true), StructField(empLocation,StringType,true), StructField(empConditions,BooleanType,true), StructField(empCity,StringType,true), StructField(empSystemIP,StringType,true)]
>       Provider: hive
>       Storage(Location: file:/hadoop/yarn/local/usercache/sax/appcache/application_1511627000183_0190/container_e34_1511627000183_0190_01_000001/spark-warehouse/testhivedata, InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat)), OverwriteOptions(false,Map()), false
> +- LogicalRDD [empID#49L, empDate#50, empName#51, empSalary#52, empLocation#53, empConditions#54, empCity#55, empSystemIP#56]
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:39)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:57)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:405)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:76)
>       at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:128)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:76)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:57)
>       at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:52)
>       at org.apache.spark.sql.execution.QueryExecution.withCachedData$lzycompute(QueryExecution.scala:73)
>       at org.apache.spark.sql.execution.QueryExecution.withCachedData(QueryExecution.scala:72)
>       at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:78)
>       at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:78)
>       at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
>       at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
>       at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
>       at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
>       at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
>       at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
>       at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:263)
>       at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:243)
>       at com.streamanalytix.spark.processor.HiveEmitter.persistRDDToHive(HiveEmitter.java:690)
>       at com.streamanalytix.spark.processor.HiveEmitter.executeWithRDD(HiveEmitter.java:395)
>       at com.streamanalytix.spark.core.AbstractProcessor.processRDDMap(AbstractProcessor.java:227)
>       at com.streamanalytix.spark.core.pipeline.SparkBatchSubmitter.definePipelineFlow(SparkBatchSubmitter.java:353)
>       at com.streamanalytix.spark.core.pipeline.SparkBatchSubmitter.getContext(SparkBatchSubmitter.java:302)
>       at com.streamanalytix.spark.core.pipeline.SparkBatchSubmitter.submit(SparkBatchSubmitter.java:93)
>       at com.streamanalytix.deploy.SaxDriver.main(SaxDriver.java:34)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
> 2017-11-29 13:11:33 Driver [INFO ] TopologyHelper - Inside updatePipelineStatus



