[ https://issues.apache.org/jira/browse/SPARK-19046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15794791#comment-15794791 ]
Sean Owen commented on SPARK-19046:
-----------------------------------
Yes, Parquet should have some optimizations for serializing cases like this.
All of this seems roughly like what I'd expect. What are you proposing:
reimplementing some special run compression for serializing ranges? Not
crazy, but I'm just not sure it's worth it, because this situation isn't going
to be common in real apps.
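For illustration only, a toy sketch (nothing that exists in Spark, just the idea) of what "run compression" for serializing ranges could mean: store a contiguous run as (start, step, count) instead of writing out every long.

    // Toy sketch only, not Spark code: a contiguous run of longs encoded as
    // (start, step, count) rather than as individual 8-byte values.
    case class RunEncodedRange(start: Long, step: Long, count: Long) {
      // Expand back to the individual values only when they are actually needed.
      // (count.toInt is fine for this toy example's sizes.)
      def values: Iterator[Long] = Iterator.iterate(start)(_ + step).take(count.toInt)
    }

    // spark.range(100000000) is exactly such a run; under this encoding it is
    // three numbers instead of roughly 800 MB of raw longs.
    val encoded = RunEncodedRange(start = 0L, step = 1L, count = 100000000L)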
> Dataset checkpoint consumes too much disk space
> -----------------------------------------------
>
> Key: SPARK-19046
> URL: https://issues.apache.org/jira/browse/SPARK-19046
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Reporter: Assaf Mendelson
>
> Consider the following simple example:
> val df = spark.range(100000000)
> df.cache()
> df.count()
> df.checkpoint()
> df.write.parquet("/test1")
> Looking at the storage tab of the UI, the DataFrame takes 97.5 MB.
> Looking at the checkpoint directory, the checkpoint takes 3.3 GB (33 times
> larger!)
> Looking at the Parquet directory, the DataFrame takes 386 MB.
> Similar behavior can be seen on less synthetic examples.
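For reference, a minimal self-contained spark-shell version of the steps above; the checkpoint directory and output path below are placeholders, not taken from the report.

    // Minimal sketch of the reproduction, assuming a local spark-shell session.
    // The checkpoint directory and output path are placeholders.
    spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")

    val df = spark.range(100000000L)
    df.cache()
    df.count()                 // materializes the cache (~97.5 MB in the storage tab)

    // checkpoint() returns a new, checkpointed Dataset; df itself is unchanged.
    // The checkpoint files are written under the directory set above.
    val checkpointed = df.checkpoint()

    // Write the same data as Parquet for the size comparison.
    df.write.parquet("/tmp/test1")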