[
https://issues.apache.org/jira/browse/PARQUET-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17492593#comment-17492593
]
Ze'ev Maor commented on PARQUET-2122:
-------------------------------------
[~junjie] thanks, that worked. Still, it does seem odd that a MAX bloom filter
size of 1MB actually results in the full 1MB being used for a Bloom filter on a
column with a cardinality of just 14, doesn't it?
> Adding Bloom filter to small Parquet file bloats in size X1700
> --------------------------------------------------------------
>
> Key: PARQUET-2122
> URL: https://issues.apache.org/jira/browse/PARQUET-2122
> Project: Parquet
> Issue Type: Bug
> Components: parquet-cli, parquet-mr
> Affects Versions: 1.13.0
> Reporter: Ze'ev Maor
> Priority: Critical
> Attachments: data.csv, data_index_bloom.parquet
>
>
> Converting a small CSV file (14 rows, one string column) to Parquet without a
> bloom filter yields a 600B file; adding '.withBloomFilterEnabled(true)' to
> the ParquetWriter yields a 1049197B file.
> It isn't clear what the extra space is used for.
> The CSV and the bloated Parquet file are attached.
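For context, the textbook Bloom filter sizing formula shows how far 1MB is from what 14 distinct values actually need. This is a minimal self-contained sketch (class and method names are illustrative); parquet-mr's block-split filter rounds sizes up to a power-of-two number of blocks, but the order of magnitude is the same.

```java
// Illustrative sizing math, not parquet-mr code: the optimal number of
// bits for n distinct values at false-positive rate p is
//   m = -n * ln(p) / (ln 2)^2
public class BloomSize {
    static long optimalBits(long ndv, double fpp) {
        return (long) Math.ceil(-ndv * Math.log(fpp) / (Math.log(2) * Math.log(2)));
    }

    public static void main(String[] args) {
        long bits = optimalBits(14, 0.01);  // on the order of a hundred bits
        System.out.println(bits + " bits, roughly " + (bits + 7) / 8 + " bytes");
        System.out.println("vs. default max: " + (1024 * 1024) + " bytes");
    }
}
```

Even after rounding up to the smallest block-aligned size, a 14-value column needs a few tens of bytes, which is why falling back to the 1MB maximum looks like bloat.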
--
This message was sent by Atlassian Jira
(v8.20.1#820001)