[
https://issues.apache.org/jira/browse/SPARK-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232543#comment-14232543
]
Hong Shen commented on SPARK-4712:
----------------------------------
The reason is that HDFS is in HA mode, and the Spark client does not recognize
the logical nameservice "mycluster", so
InetAddress.getByName(srcHost).getCanonicalHostName() throws
UnknownHostException.
I will add a patch to fix it.
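A minimal sketch of the failure mode and one possible guard (this is an illustration, not the actual patch): an HA logical nameservice such as "mycluster" is not a DNS name, so canonicalizing it fails; a helper could fall back to the raw host string when resolution throws. The method name `canonicalize` is hypothetical.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCompare {
    // Hypothetical helper mirroring the failing call in yarn.Client:
    // an HA logical nameservice (e.g. "mycluster") is not resolvable
    // via DNS, so getCanonicalHostName() would throw. Fall back to the
    // raw host string so host comparison can still proceed.
    static String canonicalize(String host) {
        try {
            return InetAddress.getByName(host).getCanonicalHostName();
        } catch (UnknownHostException e) {
            return host; // logical nameservice: compare as-is
        }
    }

    public static void main(String[] args) {
        // "localhost" resolves normally; a name under the reserved
        // .invalid TLD never resolves and exercises the fallback path.
        System.out.println(canonicalize("localhost"));
        System.out.println(canonicalize("mycluster.invalid"));
    }
}
```

With such a guard, two URIs on the same unresolvable nameservice would still compare equal, so the client could skip the redundant upload shown in the log below.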
> uploading jar when set spark.yarn.jar
> --------------------------------------
>
> Key: SPARK-4712
> URL: https://issues.apache.org/jira/browse/SPARK-4712
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 1.1.0
> Reporter: Hong Shen
>
> When I set
> spark.yarn.jar
> hdfs://mycluster/user/tdw/spark/d03/spark-assembly-1.1.0-hadoop2.2.0.jar
> the Spark app still uploads the jar:
> 2014-12-03 10:34:41,241 INFO yarn.Client (Logging.scala:logInfo(59)) -
> Uploading
> hdfs://mycluster/user/tdw/spark/d03/spark-assembly-1.1.0-hadoop2.2.0.jar to
> hdfs://mycluster/user/yarn/.sparkStaging/application_1417501428315_1544/spark-assembly-1.1.0-hadoop2.2.0.jar
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)