Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/5403#issuecomment-90776257
@jerryshao I've found it can be tricky to configure a RAM disk to be the
correct size for this use case, which is what makes a patch like this one
easier to work with. However, if folks have generally found a RAM disk to be a
suitable solution, I'm happy to just close this patch.
Re: the two layers of abstraction, I don't think there's any reason to do a
sort-based shuffle in-memory. The point of the sort-based shuffle is to
improve Spark's use of disk by storing just one file for each map task, rather
than opening <# reduce tasks> files for each map task (which makes some file
systems like ext3 struggle, and also leads to much seekier disk use). As long
as data gets stored in memory, I can't think of any reason why using the
sort-based shuffle would improve performance (and there is some, likely small,
performance cost to sorting all of the data). Are there other reasons you can
think of that you'd want to use an in-memory version of the sort-based
shuffle?
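To make the file-count argument concrete, here is a minimal sketch (not from
the patch under discussion; the function names and numbers are illustrative)
of how the number of shuffle files scales under each approach:

```python
# Illustrative sketch: shuffle file counts per approach.
# Assumes the simple model described above; ignores Spark's
# index files and file consolidation for clarity.

def hash_shuffle_files(num_map_tasks: int, num_reduce_tasks: int) -> int:
    # Hash-based shuffle: each map task opens one output file
    # per reduce task, so file count grows multiplicatively.
    return num_map_tasks * num_reduce_tasks

def sort_shuffle_files(num_map_tasks: int) -> int:
    # Sort-based shuffle: each map task writes a single sorted
    # file containing all partitions, so file count grows linearly.
    return num_map_tasks

# With 1000 map tasks and 1000 reduce tasks:
print(hash_shuffle_files(1000, 1000))  # 1000000 files
print(sort_shuffle_files(1000))        # 1000 files
```

The multiplicative blow-up in the hash-based case is what stresses file
systems like ext3 and drives seek-heavy disk access; neither concern applies
when the data stays in memory.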