Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/3525#issuecomment-65098147
@arahuja recently had YARN killing his jobs until he bumped the memory overhead to
4-5GB on executors with 20-30GB of memory, so the hard-coded 7% was not enough for
him. In general, this fraction should be configurable; some people may want less
than 7%, too. 7% is not special, afaik.
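
A quick back-of-the-envelope check of the hard-coded 7% against those numbers (the
20-30GB executor size is taken from the report above; the rest is just arithmetic):

```scala
// Back-of-the-envelope check of the hard-coded 7% against the numbers above.
val executorMemoryMB = 30 * 1024                 // a 30GB executor
val overheadMB = (0.07 * executorMemoryMB).toInt // ~2150MB
println(s"7% overhead = ${overheadMB}MB")        // far short of the 4-5GB that was needed
```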
@sryza has given me the impression that the overhead tends to grow roughly in
proportion to the executor memory, which means letting people configure the
*fraction* makes at least as much sense as having them do the division themselves
and tweak a cmd-line flag for every job in order to specify an absolute amount of
memory.
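
To make the fraction-vs-absolute tradeoff concrete, here is a minimal sketch (Scala)
of how a fraction-based setting could coexist with an absolute override. The
`spark.yarn.executor.memoryOverhead.fraction` key, the 0.07 default, and the 384MB
floor are assumptions for illustration, not a claim about what this PR or Spark
actually implements:

```scala
// Sketch: resolve the container memory overhead from configuration.
// Precedence: explicit absolute override > configurable fraction > default fraction,
// with a floor so small executors still get a sane minimum.
// The ".fraction" key name, the 0.07 default, and the 384MB floor are illustrative assumptions.
def resolveOverheadMB(conf: Map[String, String], executorMemoryMB: Int): Int = {
  val floorMB = 384
  conf.get("spark.yarn.executor.memoryOverhead") match {
    case Some(absoluteMB) => absoluteMB.toInt
    case None =>
      val fraction = conf.get("spark.yarn.executor.memoryOverhead.fraction")
        .map(_.toDouble)
        .getOrElse(0.07)
      math.max((fraction * executorMemoryMB).toInt, floorMB)
  }
}
```

With something like that, the 20-30GB executors above could set the fraction once
(somewhere around 0.15-0.25, going by the 4-5GB figures reported) instead of
recomputing an absolute value every time the executor size changes.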