[ https://issues.apache.org/jira/browse/SPARK-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-4941.
------------------------------
    Resolution: Cannot Reproduce

OK, we can reopen this if typos etc. are ruled out and it is reproducible against at least 1.3.0.
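One "typo"-class cause worth ruling out first: spark-submit stops parsing its own options at the first positional argument (the application jar) and passes everything after it to the application as arguments. In the command quoted below, --queue, --jars and --files all appear after testscalaxml_2.11-1.0.jar, so they would never reach spark-submit's option parser, and yarn.Client would have nothing extra to upload. (The backslash glued directly to the end of the --jars list may just be mail wrapping, but is worth checking as well.) A reordered sketch of the same command, with the long values elided:

$SPARK_HOME/bin/spark-submit \
  --class com.ebay.inc.scala.testScalaXML \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 3 \
  --driver-memory 1G \
  --executor-memory 1G \
  --queue hdmi-spark \
  --driver-class-path <same classpath as quoted below> \
  --jars <same comma-separated jar list as quoted below> \
  --files /export/home/b_incdata_rw/gurpreetsingh/spark-1.0.2-bin-2.4.1/conf/hive-site.xml \
  /export/home/b_incdata_rw/gurpreetsingh/jar/testscalaxml_2.11-1.0.jar \
  /export/home/b_incdata_rw/gurpreetsingh/sqlFramework.xml next_gen_linking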
> Yarn cluster mode does not upload all needed jars to driver node (Spark 1.2.0)
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-4941
>                 URL: https://issues.apache.org/jira/browse/SPARK-4941
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>            Reporter: Gurpreet Singh
>
> I am specifying additional jars and a config XML file, with the --jars and --files options, to be uploaded to the driver in the following spark-submit command. However, they are not getting uploaded, and this results in job failure. It was working with the Spark 1.0.2 build.
> Spark build being used: spark-1.2.0.tgz
> ========
> $SPARK_HOME/bin/spark-submit \
> --class com.ebay.inc.scala.testScalaXML \
> --driver-class-path /apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-XXXX-2.jar:/apache/hadoop/lib/hadoop-lzo-0.6.0.jar:/apache/hadoop/share/hadoop/common/lib/hadoop-xxxx-0.1-XXXX-2.jar:/apache/hive/lib/mysql-connector-java-5.0.8-bin.jar:/apache/hadoop/share/hadoop/common/lib/guava-11.0.2.jar \
> --master yarn \
> --deploy-mode cluster \
> --num-executors 3 \
> --driver-memory 1G \
> --executor-memory 1G \
> /export/home/b_incdata_rw/gurpreetsingh/jar/testscalaxml_2.11-1.0.jar /export/home/b_incdata_rw/gurpreetsingh/sqlFramework.xml next_gen_linking \
> --queue hdmi-spark \
> --jars /export/home/b_incdata_rw/gurpreetsingh/jar/datanucleus-api-jdo-3.2.1.jar,/export/home/b_incdata_rw/gurpreetsingh/jar/datanucleus-core-3.2.2.jar,/export/home/b_incdata_rw/gurpreetsingh/jar/datanucleus-rdbms-3.2.1.jar,/apache/hive/lib/mysql-connector-java-5.0.8-bin.jar,/apache/hadoop/share/hadoop/common/lib/hadoop-xxxx-0.1-XXXX-2.jar,/apache/hadoop/share/hadoop/common/lib/hadoop-lzo-0.6.0.jar,/apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-XXXX-2.jar\
> --files /export/home/b_incdata_rw/gurpreetsingh/spark-1.0.2-bin-2.4.1/conf/hive-site.xml
>
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> 14/12/22 23:00:17 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 14/12/22 23:00:17 INFO yarn.Client: Requesting a new application from cluster with 2026 NodeManagers
> 14/12/22 23:00:17 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (16384 MB per container)
> 14/12/22 23:00:17 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
> 14/12/22 23:00:17 INFO yarn.Client: Setting up container launch context for our AM
> 14/12/22 23:00:17 INFO yarn.Client: Preparing resources for our AM container
> 14/12/22 23:00:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 14/12/22 23:00:18 WARN hdfs.BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 14/12/22 23:00:21 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 6623380 for b_incdata_rw on 10.115.201.75:8020
> 14/12/22 23:00:21 INFO yarn.Client: Uploading resource file:/home/b_incdata_rw/gurpreetsingh/spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-1.2.0-hadoop2.4.0.jar -> hdfs://xxxx-nn.vip.xxx.com:8020/user/b_incdata_rw/.sparkStaging/application_1419242629195_8432/spark-assembly-1.2.0-hadoop2.4.0.jar
> 14/12/22 23:00:24 INFO yarn.Client: Uploading resource file:/export/home/b_incdata_rw/gurpreetsingh/jar/firstsparkcode_2.11-1.0.jar -> hdfs://xxxx-nn.vip.xxx.com:8020/user/b_incdata_rw/.sparkStaging/application_1419242629195_8432/firstsparkcode_2.11-1.0.jar
> 14/12/22 23:00:25 INFO yarn.Client: Setting up the launch environment for our AM container
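If this does reproduce, the log above is the place to look: yarn.Client uploads only the Spark assembly and a single application jar to .sparkStaging, and none of the --jars or --files entries appear, which is consistent with those options never having been parsed. (The uploaded jar, firstsparkcode_2.11-1.0.jar, also does not match the testscalaxml_2.11-1.0.jar in the quoted command, so the log may be from a different invocation.) What was actually shipped can be confirmed by listing the staging directory named in the log while the application is starting:

hdfs dfs -ls hdfs://xxxx-nn.vip.xxx.com:8020/user/b_incdata_rw/.sparkStaging/application_1419242629195_8432/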