[jira] [Commented] (SPARK-30316) data size boom after shuffle writing dataframe save as parquet

2019-12-23 Thread Xiao Li (Jira)
[ https://issues.apache.org/jira/browse/SPARK-30316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002529#comment-17002529 ] Xiao Li commented on SPARK-30316: The compression ratio depends on your data layout, instead of number ...
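
A minimal PySpark sketch of the point above, with hypothetical column names and output paths: Parquet's dictionary and run-length encodings compress long runs of similar values far better than interleaved ones, so the same rows can occupy very different amounts of disk space depending on how a shuffle lays them out.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("parquet-layout-demo").getOrCreate()

# A low-cardinality column whose values arrive in long, already-clustered runs.
df = (spark.range(0, 10_000_000)
      .withColumn("category", (F.col("id") / 100_000).cast("int")))

# Written as generated, values inside each file are clustered, so Parquet's
# dictionary/RLE encodings compress them very well.
df.write.mode("overwrite").parquet("/tmp/clustered")

# A plain round-robin repartition interleaves categories inside every file;
# the same rows usually compress noticeably worse.
df.repartition(200).write.mode("overwrite").parquet("/tmp/shuffled")

# Restoring locality inside each output file typically recovers most of the
# size, even with the same number of partitions.
(df.repartition(200, "category")
   .sortWithinPartitions("category", "id")
   .write.mode("overwrite").parquet("/tmp/reclustered"))

Repartitioning by the low-cardinality column and sorting within partitions is one common way to restore the locality that a plain repartition destroys.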

[jira] [Commented] (SPARK-30316) data size boom after shuffle writing dataframe save as parquet

2019-12-22 Thread Cesc (Jira)
[ https://issues.apache.org/jira/browse/SPARK-30316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002049#comment-17002049 ] Cesc commented on SPARK-30316: However, the rows of the two results are the same.

[jira] [Commented] (SPARK-30316) data size boom after shuffle writing dataframe save as parquet

2019-12-21 Thread Terry Kim (Jira)
[ https://issues.apache.org/jira/browse/SPARK-30316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17001833#comment-17001833 ] Terry Kim commented on SPARK-30316: This is a possible scenario because when you repartition/shuffle ...
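
For completeness, a small helper (assuming local output and the hypothetical directories from the sketch above) to compare the on-disk size of each write; the row counts can be confirmed identical with spark.read.parquet(path).count(), so only the byte sizes differ.

import os

def dir_size_bytes(path: str) -> int:
    """Sum the sizes of every file under a local directory tree."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

for label, path in [("clustered", "/tmp/clustered"),
                    ("shuffled", "/tmp/shuffled"),
                    ("reclustered", "/tmp/reclustered")]:
    print(label, dir_size_bytes(path), "bytes")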