[ https://issues.apache.org/jira/browse/ARROW-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17563995#comment-17563995 ]

Charlie Gao commented on ARROW-17008:
-------------------------------------

Ok sure. R simply wraps C ints (4 bytes) and C doubles (8 bytes). It is just 
a bit puzzling how the double version managed to compress to 4.3 MB vs 4.6 MB 
for int on default settings.
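
For anyone reproducing the comparison, here is a minimal sketch (the file 
names are illustrative, and the exact sizes will vary by Arrow version):

{code:r}
library(arrow)

# integer column: R integers are C ints (4 bytes each)
write_parquet(data.frame(x = 1:1e6), "snappy_int.parquet", compression = "snappy")

# double column: R doubles are C doubles (8 bytes each)
write_parquet(data.frame(x = as.double(1:1e6)), "snappy_dbl.parquet", compression = "snappy")

# compare on-disk sizes in bytes
file.size("snappy_int.parquet")
file.size("snappy_dbl.parquet")
{code}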

I am actually seeing some compression savings now when writing with 
use_dictionary = FALSE (verified by inspecting the files with parquet-tools). 
I guess this is equivalent to what Spark produces, and it is probably the 
optimal setting for my particular data.
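
For reference, write_parquet() exposes a use_dictionary argument, so the 
setting can be applied directly from R; a sketch (file name illustrative, 
and whether this helps depends on the data):

{code:r}
library(arrow)

# disable dictionary encoding so snappy is applied to the raw column
# data rather than to already-compact dictionary indices
write_parquet(
  data.frame(x = 1:1e6),
  "snappy_nodict.parquet",
  compression = "snappy",
  use_dictionary = FALSE
)

file.size("snappy_nodict.parquet")
{code}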

Thanks for your help.

> [R] Parquet Snappy Compression Fails for Integer Type Data
> ----------------------------------------------------------
>
>                 Key: ARROW-17008
>                 URL: https://issues.apache.org/jira/browse/ARROW-17008
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: R
>    Affects Versions: 8.0.0
>         Environment: R4.2.1 Ubuntu 22.04 x86_64
> R4.1.2 Ubuntu 22.04 Aarch64
>            Reporter: Charlie Gao
>            Priority: Major
>
> Snappy compression is not working when writing integer-type data to Parquet.
> E.g. compare file sizes for:
> {code:r}
> write_parquet(data.frame(x = 1:1e6), "snappy.parquet", compression = "snappy")
> write_parquet(data.frame(x = 1:1e6), "uncomp.parquet", compression = "uncompressed")
> {code}
> whereas for double:
> {code:r}
> write_parquet(data.frame(x = as.double(1:1e6)), "snappyd.parquet", compression = "snappy")
> write_parquet(data.frame(x = as.double(1:1e6)), "uncompd.parquet", compression = "uncompressed")
> {code}
> I have inspected the integer files using parquet-tools and the compression 
> level shows as 0%. Needless to say, I can achieve compression using Spark 
> (sparklyr) etc.
> Thanks.


