Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-92553493
By the way, just an update on this. @pwendell and I think we should use the
same approach on Mesos as we do on YARN, and we will do the same for
standalone mode in #731. There, the only configuration is
`spark.executor.cores`, which specifies the exact number of cores each
executor will use. For example, if a slave has 10 cores and this config is set
to 3, we will launch 3 executors on that slave. It's true that this won't grab
all the cores in the cluster as before, but it allows us to reuse the same
config and behavior across modes.
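
To make the sizing arithmetic concrete, here is a minimal Scala sketch. It is
purely illustrative: `executorsPerSlave` is a hypothetical helper, not the
actual Mesos scheduler backend code.

```scala
// Hypothetical sketch of the proposed sizing logic, not the real
// MesosSchedulerBackend implementation.
object ExecutorSizing {

  /** Number of executors that fit on a slave offering `offeredCores`,
   *  given a fixed `spark.executor.cores` value of `executorCores`. */
  def executorsPerSlave(offeredCores: Int, executorCores: Int): Int = {
    require(executorCores > 0, "spark.executor.cores must be positive")
    // Integer division: any leftover cores on the slave stay unused.
    offeredCores / executorCores
  }

  def main(args: Array[String]): Unit = {
    // The example from this comment: a 10-core slave with
    // spark.executor.cores=3 yields 3 executors (9 cores used, 1 idle).
    println(executorsPerSlave(offeredCores = 10, executorCores = 3)) // 3
  }
}
```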