Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1218#issuecomment-50385569
@vanzin Thanks for your PR, I left some comments inline. The main points
are the following: I'm not sure what it means for an application to have an
`Option` of an application ID; from the application's perspective, it should
always have an ID. Also, it seems a little odd to me that the ID is a property
of the task scheduler. It might make sense to expose it through a general
`spark.app.id` property so we have one mechanism for handling all of these
different cluster modes. We might even call it `spark.internal.app.id` in case
we're worried that users will go ahead and set it themselves.
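To illustrate the idea, here is a minimal sketch of the fallback such a property would imply: prefer an explicitly set `spark.app.id`, otherwise use whatever ID the cluster backend generated. Spark's codebase is Scala and this Java snippet is illustration only; the class and method names are hypothetical, not part of any Spark API.

```java
import java.util.Map;

// Hypothetical sketch (not Spark code): resolve the application ID by
// preferring an explicitly configured spark.app.id and falling back to
// an ID generated by the cluster backend.
public class AppIdResolver {
    public static String resolve(Map<String, String> conf, String generatedId) {
        return conf.getOrDefault("spark.app.id", generatedId);
    }

    public static void main(String[] args) {
        // Explicitly set: the configured value wins.
        System.out.println(resolve(Map.of("spark.app.id", "app-123"), "backend-id"));
        // Not set: fall back to the backend-generated ID.
        System.out.println(resolve(Map.of(), "backend-id"));
    }
}
```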
From your description I take it that you haven't had a chance to test
this on Mesos. I think it makes sense to hold off on adding this behavior
there for now and instead file a JIRA for it. Also, now that #1094 is merged, I
assume part of your changes in the YARN code needs to be reverted.
When you have a chance, please rebase this onto the latest master. Thanks.