[ https://issues.apache.org/jira/browse/SPARK-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375162#comment-14375162 ]

Sean Owen commented on SPARK-4941:
----------------------------------

I tried a simplified version of this with {{spark-shell}}:

{code}
spark-shell --master yarn --num-executors 3 --driver-memory 1G --jars android-core.jar,core.jar,javase.jar
{code}

and it worked as expected. I was able to access code in all the JARs.
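A quick way to verify this from the shell, in case it helps: force each task to load a class that only ships in the {{--jars}} JARs. (The ZXing class name below is just an illustrative guess at what those JARs contain; substitute any class from your own JARs.)

{code}
// Ask each of 3 tasks to load a class shipped via --jars and report its name.
// The class name is illustrative; use any class from your own JARs.
sc.parallelize(1 to 3, 3).map { _ =>
  Class.forName("com.google.zxing.qrcode.QRCodeWriter").getName
}.collect().foreach(println)
{code}

If the JARs weren't shipped to the executors, this fails with {{ClassNotFoundException}}.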

The JARs were added as well:

{code}
15/03/22 13:53:53 INFO SparkContext: Added JAR file:/home/srowen/android-core.jar at http://10.16.180.26:58311/jars/android-core.jar with timestamp 1427057633025
15/03/22 13:53:53 INFO SparkContext: Added JAR file:/home/srowen/core.jar at http://10.16.180.26:58311/jars/core.jar with timestamp 1427057633027
15/03/22 13:53:53 INFO SparkContext: Added JAR file:/home/srowen/javase.jar at http://10.16.180.26:58311/jars/javase.jar with timestamp 1427057633028
{code}

Are you sure there isn't a typo in your command line? You have several JAR names in the middle of other arguments, and it looks like one argument doesn't end with a space before the newline continuation (see just before {{--files}}).
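
For what it's worth, {{spark-submit}} stops parsing its own options at the first positional argument (the application JAR) and passes everything after it to your application's {{main}} method. In your command the application JAR appears before {{--queue}}, {{--jars}} and {{--files}}, so those flags never reach Spark at all. A corrected ordering would look roughly like this (a sketch, with the long {{--driver-class-path}} and {{--jars}} values abbreviated):

{code}
$SPARK_HOME/bin/spark-submit \
  --class com.ebay.inc.scala.testScalaXML \
  --driver-class-path /apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-XXXX-2.jar:... \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 3 \
  --driver-memory 1G \
  --executor-memory 1G \
  --queue hdmi-spark \
  --jars /export/home/b_incdata_rw/gurpreetsingh/jar/datanucleus-api-jdo-3.2.1.jar,... \
  --files /export/home/b_incdata_rw/gurpreetsingh/spark-1.0.2-bin-2.4.1/conf/hive-site.xml \
  /export/home/b_incdata_rw/gurpreetsingh/jar/testscalaxml_2.11-1.0.jar \
  /export/home/b_incdata_rw/gurpreetsingh/sqlFramework.xml next_gen_linking
{code}

Note that each continuation line ends with whitespace before the trailing backslash; without it, the shell glues the next line onto the previous token, e.g. {{--files}} onto the end of the last JAR path.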

> Yarn cluster mode does not upload all needed jars to driver node (Spark 1.2.0)
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-4941
>                 URL: https://issues.apache.org/jira/browse/SPARK-4941
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>            Reporter: Gurpreet Singh
>
> I am specifying additional JARs and a config XML file, via the --jars and --files 
> options in the spark-submit command below, to be uploaded to the driver. 
> However, they are not getting uploaded, which results in job failure. This was 
> working in the Spark 1.0.2 build.
> Spark build being used: spark-1.2.0.tgz
> ========
> $SPARK_HOME/bin/spark-submit \
> --class com.ebay.inc.scala.testScalaXML \
> --driver-class-path 
> /apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-XXXX-2.jar:/apache/hadoop/lib/hadoop-lzo-0.6.0.jar:/apache/hadoop/share/hadoop/common/lib/hadoop-xxxx-0.1-XXXX-2.jar:/apache/hive/lib/mysql-connector-java-5.0.8-bin.jar:/apache/hadoop/share/hadoop/common/lib/guava-11.0.2.jar
>  \
> --master yarn \
> --deploy-mode cluster \
> --num-executors 3 \
> --driver-memory 1G  \
> --executor-memory 1G \
> /export/home/b_incdata_rw/gurpreetsingh/jar/testscalaxml_2.11-1.0.jar 
> /export/home/b_incdata_rw/gurpreetsingh/sqlFramework.xml next_gen_linking \
> --queue hdmi-spark \
> --jars 
> /export/home/b_incdata_rw/gurpreetsingh/jar/datanucleus-api-jdo-3.2.1.jar,/export/home/b_incdata_rw/gurpreetsingh/jar/datanucleus-core-3.2.2.jar,/export/home/b_incdata_rw/gurpreetsingh/jar/datanucleus-rdbms-3.2.1.jar,/apache/hive/lib/mysql-connector-java-5.0.8-bin.jar,/apache/hadoop/share/hadoop/common/lib/hadoop-xxxx-0.1-XXXX-2.jar,/apache/hadoop/share/hadoop/common/lib/hadoop-lzo-0.6.0.jar,/apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-XXXX-2.jar\
> --files 
> /export/home/b_incdata_rw/gurpreetsingh/spark-1.0.2-bin-2.4.1/conf/hive-site.xml
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> 14/12/22 23:00:17 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 14/12/22 23:00:17 INFO yarn.Client: Requesting a new application from cluster with 2026 NodeManagers
> 14/12/22 23:00:17 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (16384 MB per container)
> 14/12/22 23:00:17 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
> 14/12/22 23:00:17 INFO yarn.Client: Setting up container launch context for our AM
> 14/12/22 23:00:17 INFO yarn.Client: Preparing resources for our AM container
> 14/12/22 23:00:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 14/12/22 23:00:18 WARN hdfs.BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 14/12/22 23:00:21 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 6623380 for b_incdata_rw on 10.115.201.75:8020
> 14/12/22 23:00:21 INFO yarn.Client: Uploading resource file:/home/b_incdata_rw/gurpreetsingh/spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-1.2.0-hadoop2.4.0.jar -> hdfs://xxxx-nn.vip.xxx.com:8020/user/b_incdata_rw/.sparkStaging/application_1419242629195_8432/spark-assembly-1.2.0-hadoop2.4.0.jar
> 14/12/22 23:00:24 INFO yarn.Client: Uploading resource file:/export/home/b_incdata_rw/gurpreetsingh/jar/firstsparkcode_2.11-1.0.jar -> hdfs://xxxx-nn.vip.xxx.com:8020:8020/user/b_incdata_rw/.sparkStaging/application_1419242629195_8432/firstsparkcode_2.11-1.0.jar
> 14/12/22 23:00:25 INFO yarn.Client: Setting up the launch environment for our AM container


