[
https://issues.apache.org/jira/browse/SPARK-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361323#comment-15361323
]
Saisai Shao commented on SPARK-15923:
-------------------------------------
[~WeiqingYang] This is the correct behavior. In client mode, attemptId is
{{None}}, which means it is not required, while in cluster mode attemptId is a
number starting from 1. This is a by-design choice: in client mode, several
attempts all belong to one Spark application, whereas in cluster mode each
attempt is a new Spark application from Spark's point of view. So we need the
attempt id to distinguish different attempts (Spark applications) in cluster
mode.
This assumption also exists in several places other than the REST API, such as
the event-log file name and the history URL. So you have to use different URLs
for client and cluster mode. From my understanding there's no issue here. Also,
reviewing your patch, the change is not complete or solid.
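To illustrate the distinction above, here is a minimal sketch of the two URL shapes the History Server REST API expects. The host, port, and application id are placeholders taken from the issue report below, and the helper function is purely illustrative, not part of Spark:

```python
# Base of the Spark History Server REST API; host/port are placeholders.
BASE = "http://<host>:18080/api/v1/applications"

def executors_url(app_id, attempt_id=None):
    """Build the executors endpoint URL (illustrative helper).

    attempt_id is None for yarn-client mode (all attempts belong to one
    Spark application) and a number starting from 1 for yarn-cluster
    mode (each attempt is its own Spark application).
    """
    if attempt_id is None:
        return f"{BASE}/{app_id}/executors"
    return f"{BASE}/{app_id}/{attempt_id}/executors"

# yarn-client mode: no attempt id in the path
print(executors_url("application_1465778870517_0001"))
# yarn-cluster mode: attempt id included
print(executors_url("application_1465778870517_0001", 1))
```

Using the cluster-mode URL (with `/1/`) against a client-mode application, or vice versa, is what produces the 404 described in the issue.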
> Spark Application rest api returns "no such app: <appId>"
> ---------------------------------------------------------
>
> Key: SPARK-15923
> URL: https://issues.apache.org/jira/browse/SPARK-15923
> Project: Spark
> Issue Type: Bug
> Affects Versions: 1.6.1
> Reporter: Yesha Vora
>
> Env : secure cluster
> Scenario:
> * Run SparkPi application in yarn-client or yarn-cluster mode
> * After application finishes, check Spark HS rest api to get details like
> jobs / executor etc.
> {code}
> http://<host>:18080/api/v1/applications/application_1465778870517_0001/1/executors
> {code}
>
> The REST API returns HTTP Code: 404 and prints "HTTP Data: no such app:
> application_1465778870517_0001"
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)