Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-92653950
The trade-off is in predictability and how easily users can interpret the
implications of the options they are passing. I think it could be fine
later on to add something more advanced where we scale up, but given that
we already have well-defined semantics in other modes (which have their own
documentation, examples, etc.), my straw-man approach is to just preserve
the semantics already associated with those modes, i.e. requesting
fixed-sized executors and then letting the number of executors be
potentially elastic.
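For reference, the "fixed-sized executors, elastic executor count" semantics could be expressed with Spark's standard dynamic-allocation properties; a minimal sketch, in which the master URL, resource values, and application file are illustrative assumptions rather than anything from this thread:

```shell
# Sketch: executors have a fixed size (cores/memory per executor), while the
# *number* of executors is elastic via dynamic allocation. Property names are
# Spark's standard configuration keys; all concrete values are illustrative.
spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=8g \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.shuffle.service.enabled=true \
  my_app.py
```

With this shape of configuration, each executor is a predictable, fixed-size unit, and only the count scales up or down, which is the predictability argument made above.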
On Mon, Apr 13, 2015 at 11:15 PM, Timothy Chen <[email protected]>
wrote:
> @andrewor14 <https://github.com/andrewor14> What's the rationale for
> having an exact number of cores per executor vs. allowing any core count
> between maxCoresPerExecutor and the executor cores? It seems like the
> cookie-cutter amount is hard to predict in a dynamic cluster, so having
> the flexibility (which the current coarse-grained mode has!) helps users
> launch one when the opportunity is there.
>
> Reply to this email directly or view it on GitHub:
> <https://github.com/apache/spark/pull/4027#issuecomment-92631828>.
>