GitHub user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/2746#issuecomment-59854068
@sryza What's the policy used by MR/Tez? When talking to Patrick/Andrew
offline, I argued for allocating executors so that the total number of
executors given to the driver equals (# pending tasks) / (cores per
executor), subject, of course, to fairness constraints that might cap it
lower. I thought this might provide better "out of the box" behavior
without needing to set config parameters, but the approach was vetoed by
others as too hard to understand. So I'm curious what MR/Tez do, and
whether their approach is perceived as easy to understand?
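
Concretely, the heuristic I'm describing amounts to something like the
following minimal Scala sketch (hypothetical names, not code from this PR):
size the executor pool to one core per pending task, rounded up, then cap it
at whatever the fairness constraints allow.

```scala
// Illustrative sketch of the proposed sizing heuristic only;
// names here are hypothetical and this is not code from this PR.
object ExecutorTargetSketch {
  /** One core per pending task, rounded up, capped by the fair-share limit. */
  def targetExecutors(pendingTasks: Int, coresPerExecutor: Int, fairShareCap: Int): Int = {
    require(coresPerExecutor > 0, "coresPerExecutor must be positive")
    val wanted = math.ceil(pendingTasks.toDouble / coresPerExecutor).toInt
    math.min(wanted, fairShareCap)
  }
}

// e.g. 100 pending tasks on 4-core executors => a target of 25 executors
// (or fewer, if the fair-share cap is lower).
```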