[ https://issues.apache.org/jira/browse/SPARK-16680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391701#comment-15391701 ]

KaiXinXIaoLei commented on SPARK-16680:
---------------------------------------

[~dongjoon] hi. I have tested, and there is still a problem in yarn-cluster mode. My 
application is:
    import org.apache.spark.{SparkConf, SparkContext}

    val sparkConf = new SparkConf().setAppName("FemaleInfo")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
    sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
    sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
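The application above is submitted in yarn-cluster mode with userClassPathFirst enabled; a submission along these lines reproduces the issue (the jar name and main-class name here are placeholders, not taken from the report):

    bin/spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.driver.userClassPathFirst=true \
      --class FemaleInfo \
      femaleinfo.jar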

The error log is: 
16/07/25 18:48:05 ERROR yarn.ApplicationMaster: User class threw exception: 
java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: 
org/apache/hadoop/hive/shims/ShimLoader when creating Hive client using classpath: 
file:/opt/apache/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1469442803590_0003/container_1469442803590_0003_01_000001/__app__.jar,
file:/opt/apache/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1469442803590_0003/container_1469442803590_0003_01_000001/datanucleus-api-jdo-3.2.6.jar,
file:/opt/apache/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1469442803590_0003/container_1469442803590_0003_01_000001/datanucleus-core-3.2.10.jar,
file:/opt/apache/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1469442803590_0003/container_1469442803590_0003_01_000001/datanucleus-rdbms-3.2.9.jar
Please make sure that jars for your version of hive and hadoop are included in 
the paths passed to SQLConfEntry(key = spark.sql.hive.metastore.jars, 
defaultValue=builtin, doc=
 Location of the jars that should be used to instantiate the HiveMetastoreClient.
 This property can be one of three options: 
 1. "builtin"
   Use Hive 1.2.1, which is bundled with the Spark assembly jar when
   <code>-Phive</code> is enabled. When this option is chosen,
   <code>spark.sql.hive.metastore.version</code> must be either
   <code>1.2.1</code> or not defined.
 2. "maven"
   Use Hive jars of specified version downloaded from Maven repositories.
 3. A classpath in the standard format for both Hive and Hadoop.
 , isPublic = true).
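Per the third option in that error message, a possible workaround is to point spark.sql.hive.metastore.jars at an explicit Hive/Hadoop classpath instead of relying on "builtin". This is a sketch only; the install paths below are placeholders and must match the actual cluster layout, e.g. in conf/spark-defaults.conf:

    spark.sql.hive.metastore.version  1.2.1
    spark.sql.hive.metastore.jars     /opt/hive/lib/*:/opt/hadoop/share/hadoop/common/lib/*

Whether this avoids the NoClassDefFoundError under spark.driver.userClassPathFirst=true is exactly what is in question in this issue.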

> Set spark.driver.userClassPathFirst=true, and run spark-sql failed
> ------------------------------------------------------------------
>
>                 Key: SPARK-16680
>                 URL: https://issues.apache.org/jira/browse/SPARK-16680
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: KaiXinXIaoLei
>
> There is an exception when I run “sh bin/spark-sql  --conf 
> spark.driver.userClassPathFirst=true” 
> 16/07/22 15:54:54 INFO HiveSharedState: Warehouse path is 
> 'file:/home/hll/code/spark-master-0722/spark-master/spark-warehouse'.
> Exception in thread "main" java.lang.IllegalArgumentException: Unable to 
> locate hive jars to connect to metastore. Please set 
> spark.sql.hive.metastore.jars.
>         at 
> org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:299)
>         at 
> org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
>         at 
> org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
>         at 
> org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
>         at 
> org.apache.spark.sql.hive.HiveSessionState.metadataHive$lzycompute(HiveSessionState.scala:43)
>         at 
> org.apache.spark.sql.hive.HiveSessionState.metadataHive(HiveSessionState.scala:43)
>         at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:62)
>         at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:288)
>         at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:137)
>         at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at 
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
>         at 
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
