Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/894#issuecomment-44366899
Agree with @tgravescs and @mridulm that a constant overhead makes more
sense.
@pwendell YARN includes the memory usage of subprocesses in its calculation.
Making the overhead configurable probably makes sense. PySpark could add a
fixed amount, and users might want to add more if they're allocating direct
byte buffers. Some compression codecs allocate direct byte buffers, so if we
want to get fancy, we could take that into account.
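
To make the direct-buffer point concrete, here is a minimal, Spark-agnostic sketch of why such allocations need headroom beyond the JVM heap (the 64 MB figure is just an example):

```scala
import java.nio.ByteBuffer

object DirectBufferDemo {
  def main(args: Array[String]): Unit = {
    // A direct buffer lives outside the JVM heap: it does not count against
    // -Xmx, but it does count toward the YARN container's memory limit,
    // which is why extra overhead can be needed.
    val buf = ByteBuffer.allocateDirect(64 * 1024 * 1024) // 64 MB off-heap
    println(s"direct buffer capacity: ${buf.capacity()} bytes")
  }
}
```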
I'm opposed to removing the 384 MB altogether. Having had to explain 2
bajillion times that two MR configs need to be updated every time one wants to
increase task memory, I've really appreciated that Spark handles this
automatically.
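
For illustration, here is a rough sketch of the constant-plus-configurable scheme being discussed; the property name and defaults are assumptions for the example, not necessarily what this PR implements:

```scala
// Rough sketch only: keep a constant default overhead but let users raise it.
// "spark.yarn.executor.memoryOverhead" is an assumed property name here.
object MemoryOverheadSketch {
  val DefaultOverheadMb = 384

  // Container request = executor heap + non-heap headroom, so raising
  // executor memory never requires touching a second setting just to
  // keep the JVM within its container.
  def containerRequestMb(executorMemoryMb: Int, configuredOverheadMb: Option[Int]): Int =
    executorMemoryMb + configuredOverheadMb.getOrElse(DefaultOverheadMb)

  def main(args: Array[String]): Unit = {
    println(containerRequestMb(8192, None))       // 8576: default 384 MB overhead
    println(containerRequestMb(8192, Some(1024))) // 9216: user bumped the overhead
  }
}
```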