[
https://issues.apache.org/jira/browse/ARROW-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17563985#comment-17563985
]
Charlie Gao commented on ARROW-17008:
-------------------------------------
Thanks for the quick response. I can replicate this using `use_dictionary = FALSE`.
I am running tests on my actual data and will let you know the results.
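For reference, a minimal sketch of that workaround (the file name is illustrative):
{code:r}
library(arrow)

# Disable dictionary encoding so Snappy operates on the plain-encoded
# pages, then check the resulting file size.
write_parquet(data.frame(x = 1:1e6), "snappy_nodict.parquet",
              compression = "snappy", use_dictionary = FALSE)
file.size("snappy_nodict.parquet")
{code}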
[https://arrow.apache.org/blog/2019/09/05/faster-strings-cpp-parquet/]
From the above blog post, it seems that Snappy compression works on the
plain-encoded part and not the dictionary-encoded part. This would explain why
it doesn't work on the reprex: all the numbers are unique, so the dictionary
encoding would be the same size as the uncompressed data.
I assume, then, that dictionary encoding is turned off by default when the
data is of double type?
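A quick sketch that illustrates this reasoning (file names are illustrative,
and the size difference is the expected behaviour, not verified output): a
highly repetitive integer column should dictionary-encode and compress well,
while an all-unique column gains nothing from the dictionary.
{code:r}
library(arrow)

# All-unique values: the dictionary is as large as the data itself,
# so Snappy has little to work with.
write_parquet(data.frame(x = 1:1e6), "unique.parquet",
              compression = "snappy")

# Highly repetitive values: the dictionary is tiny and the page of
# dictionary indices compresses well.
write_parquet(data.frame(x = rep(1:10, 1e5)), "repeated.parquet",
              compression = "snappy")

file.size("unique.parquet")
file.size("repeated.parquet")
{code}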
> [R] Parquet Snappy Compression Fails for Integer Type Data
> ----------------------------------------------------------
>
> Key: ARROW-17008
> URL: https://issues.apache.org/jira/browse/ARROW-17008
> Project: Apache Arrow
> Issue Type: Bug
> Components: R
> Affects Versions: 8.0.0
> Environment: R4.2.1 Ubuntu 22.04 x86_64
> R4.1.2 Ubuntu 22.04 Aarch64
> Reporter: Charlie Gao
> Priority: Major
>
> Snappy compression is not working when writing integer-type data to Parquet.
> E.g. compare file sizes for:
> {code:r}
> write_parquet(data.frame(x = 1:1e6), "snappy.parquet", compression = "snappy")
> write_parquet(data.frame(x = 1:1e6), "uncomp.parquet", compression =
> "uncompressed")
> {code}
> whereas for double:
> {code:r}
> write_parquet(data.frame(x = as.double(1:1e6)), "snappyd.parquet", compression = "snappy")
> write_parquet(data.frame(x = as.double(1:1e6)), "uncompd.parquet", compression = "uncompressed")
> {code}
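> To compare the sizes directly (assuming the four files above were written to the working directory):
> {code:r}
> file.size(c("snappy.parquet", "uncomp.parquet",
>             "snappyd.parquet", "uncompd.parquet"))
> {code}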
> I have inspected the integer files using parquet-tools, and the compression
> level shows as 0%. Needless to say, I can achieve compression using Spark
> (sparklyr) etc.
> Thanks.