[ https://issues.apache.org/jira/browse/SPARK-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219292#comment-14219292 ]

sam commented on SPARK-4492:
----------------------------

Well, I do try to package Spark in my jar (I had to fight an epic deduplicate 
battle with sbt); it's the only way I can get syntax highlighting for Spark 
stuff to work in IntelliJ (rather than using "provided"). Why is Spark so 
special? I've used a lot of client libraries and I have never needed a 
special script to execute my jars. Historically I've just built a fat jar, 
deployed it, and run it with java -cp.
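
For the record, this is roughly the sbt-assembly merge strategy the deduplicate battle forced on me. The match cases here are illustrative, not a recommendation; the exact conflicts depend on your dependency tree:

```scala
// build.sbt (sketch): resolve "deduplicate" errors when assembling a fat jar
// that bundles Spark. Cases below are illustrative examples only.
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard // drop conflicting manifests/signatures
  case _                             => MergeStrategy.first   // otherwise keep the first copy found
}
```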

What exactly does spark-submit do? The documentation says it sets up the 
classpath; is that all it does?

I can get it working with spark-submit, but using a script to execute a jar 
just doesn't feel right to me. What happens when shiny new tool X comes out, 
and X is also a nightmare to build, so its authors decide to solve the 
problem by writing a script to execute the jar called run-X-jar ... then I 
won't be able to use both X and Spark.
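
For concreteness, the invocation that does work for me looks roughly like this (class and jar names taken from the quick-start example; $SPARK_HOME is wherever the Spark distribution is unpacked):

```
$SPARK_HOME/bin/spark-submit \
  --class my.main.Class \
  --master yarn-client \
  my.jar
```

whereas the plain "java -cp my.jar my.main.Class" equivalent fails with the exception quoted below.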

> Exception when following SimpleApp tutorial java.lang.ClassNotFoundException: 
> org.apache.spark.deploy.yarn.YarnSparkHadoopUtil
> ------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-4492
>                 URL: https://issues.apache.org/jira/browse/SPARK-4492
>             Project: Spark
>          Issue Type: Bug
>            Reporter: sam
>
> When I follow the example here 
> https://spark.apache.org/docs/1.0.2/quick-start.html and run with "java -cp 
> my.jar my.main.Class" with master set to "yarn-client" I get the below 
> exception.
> Exception in thread "main" java.lang.ExceptionInInitializerError
>       at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
>       at com.barclays.SimpleApp$.main(SimpleApp.scala:11)
>       at com.barclays.SimpleApp.main(SimpleApp.scala)
> Caused by: org.apache.spark.SparkException: Unable to load YARN support
>       at 
> org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:106)
>       at 
> org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:101)
>       at 
> org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
>       ... 3 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.spark.deploy.yarn.YarnSparkHadoopUtil
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>       at java.lang.Class.forName0(Native Method)
>       at java.lang.Class.forName(Class.java:169)
>       at 
> org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:102)
>       ... 5 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
