Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/1391#issuecomment-49348179
Bringing the discussion back online. Thanks for all the input so far.
Ran a few experiments yesterday and today. The number of executors (the
other main handle we wanted to factor in) doesn't seem to have any noticeable
impact on overhead. Tried a few other parameters such as num_partitions and
default_parallelism, but none of them correlate either. Confirmed that the
overhead is proportional to container size. I have also been trying to tune
the multiplier to minimize potential waste, and I think 6% (as opposed to the
7% we currently have) is the lowest we should go. Modifying the PR accordingly.
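
For reference, a minimal sketch of what a proportional overhead along these
lines could look like. The object/method names, the 384 MB floor, and the
sample sizes are assumptions for illustration, not necessarily what the PR
itself implements:

```scala
// Hypothetical sketch: compute container memory overhead as a fixed fraction
// of the requested executor memory, with a lower bound so that small
// containers still get a reasonable cushion. Not the PR's actual code.
object MemoryOverheadSketch {
  // 6% multiplier discussed above; the 384 MB floor is an assumption here.
  val OverheadFraction = 0.06
  val MinOverheadMB = 384

  /** Overhead in MB for a given executor/container memory request (in MB). */
  def overheadFor(executorMemoryMB: Int): Int =
    math.max((executorMemoryMB * OverheadFraction).toInt, MinOverheadMB)

  def main(args: Array[String]): Unit = {
    // e.g. an 8 GB executor gets ~491 MB of overhead; a 2 GB one hits the floor.
    Seq(2048, 8192, 20480).foreach { m =>
      println(s"executor ${m} MB -> overhead ${overheadFor(m)} MB")
    }
  }
}
```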