Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3252#issuecomment-63574787
@davies and I discussed this offline. During sorting, PySpark tries to
limit its total memory usage to 500 megabytes (by default; this can be
configured). If most of that memory was used up by broadcast variables, then
PySpark would spill constantly. This could happen even if there was extra free
memory on the machine, because PySpark wouldn't exceed its limit to take
advantage of that free memory. This patch addresses the issue by allowing the
limit to increase in these cases.
ExternalAggregator already does this, too. This seems like a good fix, so
I'm going to merge it into `master` and `branch-1.2`.
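For reference, here's a minimal standalone sketch of the pattern (not the actual `pyspark.shuffle` code): the sorter spills sorted runs to disk when process memory passes a soft limit, but it recomputes that limit from current usage, so memory already held by something else (e.g. a large broadcast variable) raises the ceiling instead of triggering a spill on every batch. The names (`MemoryLimitedSorter`, `get_used_memory_mb`) are illustrative only, and it assumes a Unix-like platform for the `resource` module.

```python
import heapq
import os
import pickle
import resource
import tempfile


def get_used_memory_mb():
    # Peak resident set size of this process, in megabytes.
    # (ru_maxrss is reported in kilobytes on Linux; adjust for other OSes.)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss >> 10


class MemoryLimitedSorter:
    """Sort an iterator, spilling sorted runs to disk when memory use
    passes a soft limit, then merge the runs back together."""

    def __init__(self, memory_limit_mb=500):
        self.memory_limit_mb = memory_limit_mb

    def _next_limit(self):
        # The key idea of the fix: if the process is already above the
        # configured limit (e.g. because of broadcast variables), raise
        # the limit slightly above current usage instead of spilling on
        # every small batch of records.
        return max(self.memory_limit_mb, get_used_memory_mb() * 1.05)

    def sorted(self, iterator, key=None):
        batch, limit, spilled_files = [], self._next_limit(), []
        for item in iterator:
            batch.append(item)
            # Check memory every 1000 records to keep overhead low.
            if len(batch) % 1000 == 0 and get_used_memory_mb() > limit:
                batch.sort(key=key)
                f = tempfile.NamedTemporaryFile(delete=False)
                pickle.dump(batch, f)
                f.close()
                spilled_files.append(f.name)
                batch = []
                limit = self._next_limit()  # re-check the ceiling after spilling
        batch.sort(key=key)
        if not spilled_files:
            return iter(batch)
        # Merge the in-memory run with the spilled runs.
        runs = [batch]
        for name in spilled_files:
            with open(name, "rb") as f:
                runs.append(pickle.load(f))
            os.unlink(name)
        return heapq.merge(*runs, key=key)
```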