[ https://issues.apache.org/jira/browse/SPARK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264979#comment-14264979 ]

Chris Albright commented on SPARK-4757:
---------------------------------------

Is there an ETA on when this might make it into a release? Or can we check out 
a commit and build it locally? I don't see a commit hash for the fix.

> Yarn-client failed to start due to Wrong FS error in distCacheMgr.addResource
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-4757
>                 URL: https://issues.apache.org/jira/browse/SPARK-4757
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.3.0
>            Reporter: Jianshi Huang
>             Fix For: 1.2.0, 1.3.0
>
>
> I got the following error during Spark startup (Yarn-client mode):
> 14/12/04 19:33:58 INFO Client: Uploading resource file:/x/home/jianshuang/spark/spark-latest/lib/datanucleus-api-jdo-3.2.6.jar -> hdfs://stampy/user/jianshuang/.sparkStaging/application_1404410683830_531767/datanucleus-api-jdo-3.2.6.jar
> java.lang.IllegalArgumentException: Wrong FS: hdfs://stampy/user/jianshuang/.sparkStaging/application_1404410683830_531767/datanucleus-api-jdo-3.2.6.jar, expected: file:///
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:643)
>         at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:79)
>         at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:506)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:724)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:501)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
>         at org.apache.spark.deploy.yarn.ClientDistributedCacheManager.addResource(ClientDistributedCacheManager.scala:67)
>         at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$5.apply(ClientBase.scala:257)
>         at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$5.apply(ClientBase.scala:242)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.deploy.yarn.ClientBase$class.prepareLocalResources(ClientBase.scala:242)
>         at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:35)
>         at org.apache.spark.deploy.yarn.ClientBase$class.createContainerLaunchContext(ClientBase.scala:350)
>         at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:35)
>         at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:80)
>         at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
>         at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:140)
>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:335)
>         at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:986)
>         at $iwC$$iwC.<init>(<console>:9)
>         at $iwC.<init>(<console>:18)
>         at <init>(<console>:20)
>         at .<init>(<console>:24)
> According to Liancheng and Andrew, this hotfix might be the root cause:
>  
> https://github.com/apache/spark/commit/38cb2c3a36a5c9ead4494cbc3dde008c2f0698ce
> Jianshi
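
For what it's worth, the exception itself is just Hadoop's FileSystem.checkPath rejecting an hdfs:// path that was handed to the local filesystem. Below is a minimal standalone sketch of that failure mode; this is my own reconstruction, not Spark's code, and the cluster name and staging path are placeholders copied from the report above:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object WrongFsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // With fs.defaultFS pointing at the local filesystem, FileSystem.get(conf)
    // returns a LocalFileSystem backed by RawLocalFileSystem.
    conf.set("fs.defaultFS", "file:///")
    val localFs = FileSystem.get(conf)

    // Placeholder path modelled on the one in the report above.
    val hdfsPath = new Path(
      "hdfs://stampy/user/jianshuang/.sparkStaging/app_0001/datanucleus-api-jdo-3.2.6.jar")

    // Handing an hdfs:// path to the local filesystem trips FileSystem.checkPath,
    // which throws IllegalArgumentException: Wrong FS: ..., expected: file:///
    // (the same failure surfacing in ClientDistributedCacheManager.addResource).
    try {
      localFs.getFileStatus(hdfsPath)
    } catch {
      case e: IllegalArgumentException => println(s"Reproduced: ${e.getMessage}")
    }

    // Resolving the filesystem from the path itself avoids the mismatch, since
    // Path.getFileSystem picks the implementation matching the path's scheme.
    // (Commented out because it needs a reachable HDFS nameservice.)
    // val hdfs = hdfsPath.getFileSystem(conf)
  }
}

Whether prepareLocalResources/addResource ends up in exactly this state after the referenced hotfix is a guess on my part; the sketch only shows where the "Wrong FS ... expected: file:///" message comes from.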



