[ https://issues.apache.org/jira/browse/SPARK-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Thomas Graves resolved SPARK-1395.
----------------------------------
Resolution: Fixed
Fix Version/s: 1.1.0 (was: 1.0.0)
> Cannot launch jobs on Yarn cluster with "local:" scheme in SPARK_JAR
> --------------------------------------------------------------------
>
> Key: SPARK-1395
> URL: https://issues.apache.org/jira/browse/SPARK-1395
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 1.0.0
> Reporter: Marcelo Vanzin
> Assignee: Marcelo Vanzin
> Fix For: 1.1.0
>
>
> If you define SPARK_JAR and the related environment variables using "local:"
> URIs, you cannot submit a job to a YARN cluster. For example, with:
> SPARK_JAR=local:/tmp/spark-assembly-1.0.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
> SPARK_YARN_APP_JAR=local:/tmp/spark-examples-assembly-1.0.0-SNAPSHOT.jar
> running SparkPi through bin/run-example yields:
> 14/04/02 13:23:33 INFO yarn.Client: Preparing Local resources
> Exception in thread "main" java.io.IOException: No FileSystem for scheme: local
> at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2385)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2392)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at org.apache.spark.deploy.yarn.ClientBase$class.org$apache$spark$deploy$yarn$ClientBase$$copyRemoteFile(ClientBase.scala:156)
> at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$3.apply(ClientBase.scala:217)
> at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$3.apply(ClientBase.scala:212)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at org.apache.spark.deploy.yarn.ClientBase$class.prepareLocalResources(ClientBase.scala:212)
> at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:41)
> at org.apache.spark.deploy.yarn.Client.runApp(Client.scala:76)
> at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:81)
> at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:129)
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:226)
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:96)
> at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
> at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
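For context, the trace shows Path.getFileSystem being asked to resolve the "local:" scheme, which Hadoop never registers as a FileSystem implementation; in Spark's YARN support, "local:" means the file is already present on every node, so the client should skip the copy entirely rather than hand the URI to Hadoop. Below is a minimal, illustrative Scala sketch of that kind of scheme check; it is not the patch that shipped in 1.1.0, and the object and method names (LocalSchemeSketch, needsUpload) are invented for the example.

import java.net.URI

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Illustrative sketch only -- not the actual Spark fix.
// "local:" is Spark's convention for "already present on every node's disk",
// so a YARN client should skip the upload instead of asking Hadoop for a
// FileSystem implementation that is never registered for that scheme.
object LocalSchemeSketch {
  private val LocalScheme = "local"

  // True when the resource must be copied into the YARN staging directory,
  // i.e. for any scheme Hadoop can actually resolve (hdfs, file, no scheme, ...).
  def needsUpload(uriStr: String): Boolean =
    new URI(uriStr).getScheme != LocalScheme

  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val sparkJar = "local:/tmp/spark-assembly-1.0.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar"

    if (needsUpload(sparkJar)) {
      // Only reach Hadoop's FileSystem API for schemes it knows; doing this
      // with a "local:" URI is exactly what raises the IOException above.
      val fs = new Path(sparkJar).getFileSystem(conf)
      println(s"would copy $sparkJar using $fs")
    } else {
      println(s"skipping upload; $sparkJar is expected on each node's local disk")
    }
  }
}

Under this assumption, only URIs whose scheme Hadoop can resolve ever reach the FileSystem API, so "local:" resources never trigger the "No FileSystem for scheme: local" failure shown above.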