Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10768#issuecomment-172510959
Thanks for clarifying. I think this is a bit confusing. You want to respect
the Mesos constraints that belong to each particular job submitted to the
dispatcher. Those configuration options should come from the submission
request (probably found inside `schedulerProperties`), not from the Spark
config options used to launch the dispatcher.
Right now, the code would only launch drivers against a single set of
constraints, defined once when the dispatcher is launched. I believe the
better solution is to let each Spark job define its Mesos constraints
independently at submission time.
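
For illustration, a minimal sketch of what per-job resolution could look like. The object and method names are hypothetical, not the actual dispatcher code; it only assumes each submission carries its properties map (the `schedulerProperties` mentioned above) and that the dispatcher may hold a `spark.mesos.constraints` default of its own:

```scala
// Illustrative names only -- not the actual dispatcher code.
object ConstraintResolution {

  // Parse a constraint string of the form "attr1:v1,v2;attr2:v3"
  // (the spark.mesos.constraints syntax); a bare "attr" means the
  // attribute only needs to be present, with any value.
  def parseConstraintString(s: String): Map[String, Set[String]] =
    if (s.isEmpty) Map.empty
    else s.split(";").map { clause =>
      clause.split(":", 2) match {
        case Array(attr)         => attr -> Set.empty[String]
        case Array(attr, values) => attr -> values.split(",").toSet
      }
    }.toMap

  // Resolve the constraints for one submitted driver: prefer the value
  // carried in the job's own schedulerProperties, and only fall back to
  // the dispatcher-wide default if the job did not set one.
  def driverConstraints(
      schedulerProperties: Map[String, String],
      dispatcherDefault: Option[String]): Map[String, Set[String]] = {
    val raw = schedulerProperties
      .get("spark.mesos.constraints")
      .orElse(dispatcherDefault)
      .getOrElse("")
    parseConstraintString(raw)
  }
}
```

With something along these lines, two jobs submitted to the same dispatcher could be matched against different offers, instead of every driver being pinned to whatever constraints the dispatcher happened to be started with.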