GitHub user aramesh117 opened a pull request:
https://github.com/apache/spark/pull/17024
[SPARK-19525][CORE] Compressing checkpoints.
Spark's performance improves greatly if we enable compression of
checkpoints.
## What changes were proposed in this pull request?
- Compress each partition before writing it to the persistent file system.
- Decompress each partition after reading it back from the persistent file system.
- Default behavior is to not compress, preserving existing behavior.
- Add logging of checkpoint durations for A/B testing with and without
compression enabled.
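The patch itself wires compression into Spark's checkpoint write/read path; as a stand-in, here is a minimal round-trip sketch of the idea, compressing a serialized partition's bytes before they are written and decompressing them after they are read back. The class name and the choice of GZIP are illustrative assumptions, not taken from the patch (Spark would use its configured `CompressionCodec`).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CheckpointCompressionSketch {
    // Compress a serialized partition's bytes before writing the checkpoint file.
    static byte[] compress(byte[] bytes) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buf)) {
            gzip.write(bytes);
        }
        return buf.toByteArray();
    }

    // Decompress the bytes read back from the checkpoint file.
    static byte[] decompress(byte[] bytes) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(bytes))) {
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                out.write(chunk, 0, n);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Repetitive data compresses well, much like serialized RDD records.
        byte[] original = "spark checkpoint partition ".repeat(100).getBytes("UTF-8");
        byte[] compressed = compress(original);
        byte[] roundTripped = decompress(compressed);
        System.out.println(Arrays.equals(original, roundTripped)); // true
        System.out.println(compressed.length < original.length);   // true
    }
}
```

The round trip must be lossless, which is why the default stays "no compression": readers of old, uncompressed checkpoints are unaffected unless the feature is explicitly enabled.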
## How was this patch tested?
This was tested with the existing unit tests (for backwards compatibility) and
with new tests covering this functionality. It has also been running in our
production system for almost a year.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/aramesh117/spark master
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/17024.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #17024
----
commit 7837b0c6052fa20bd1a6cf823947e95379d6d3b8
Author: Aaditya Ramesh <[email protected]>
Date: 2017-02-22T05:05:48Z
[SPARK-19525][CORE] Compressing checkpoints.
Spark's performance improves greatly if we enable compression of
checkpoints.
----