[ https://issues.apache.org/jira/browse/PARQUET-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17708670#comment-17708670 ]
ASF GitHub Bot commented on PARQUET-2256:
-----------------------------------------
mapleFU commented on PR #195:
URL: https://github.com/apache/parquet-format/pull/195#issuecomment-1496947204
> General question about the approach: why add compression rather than use
something like the RLE/Bit-Packed encoding from the spec, for performance on
sparse filters and on filters where the value count is close to the NDV?
I'm not that familiar with compression, but I don't think it's easy to treat an
SBBF as an RLE/bit-packing-friendly input stream. Once loaded, it's just random
bits and bytes...
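To make that concrete, here is a minimal Java sketch (my illustration, not code
from the PR): java.util.zip.Deflater stands in for any general-purpose codec,
and the byte arrays merely simulate bit densities rather than real SBBF blocks.
A filter sized close to its NDV has roughly half its bits set and barely
compresses, while an oversized, mostly-empty filter compresses very well.

```
import java.util.Random;
import java.util.zip.Deflater;

public class BloomCompressibility {
    // Deflate a buffer fully and return the compressed size in bytes.
    static int deflatedSize(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length + 1024]; // deflate overhead is small
        int n = 0;
        while (!deflater.finished()) {
            n += deflater.deflate(out, n, out.length - n);
        }
        deflater.end();
        return n;
    }

    public static void main(String[] args) {
        int size = 1 << 20; // 1 MiB of filter bits
        Random rnd = new Random(42);

        // A filter sized for its true NDV has ~50% of bits set; random bytes
        // approximate that density, and deflate gains almost nothing on them.
        byte[] dense = new byte[size];
        rnd.nextBytes(dense);

        // A filter sized for a 1M NDV guess but holding far fewer values is
        // mostly zero bits, and deflate shrinks it dramatically.
        byte[] sparse = new byte[size];
        for (int i = 0; i < 2_000; i++) {
            sparse[rnd.nextInt(size)] = (byte) (1 << rnd.nextInt(8));
        }

        System.out.printf("dense:  %d -> %d bytes%n", size, deflatedSize(dense));
        System.out.printf("sparse: %d -> %d bytes%n", size, deflatedSize(sparse));
    }
}
```

So compression would mostly help the oversized-default case described in the
issue below, not a filter that is actually operating near its design load.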
> Adding Compression for BloomFilter
> ----------------------------------
>
> Key: PARQUET-2256
> URL: https://issues.apache.org/jira/browse/PARQUET-2256
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-format
> Affects Versions: format-2.9.0
> Reporter: Xuwei Fu
> Assignee: Xuwei Fu
> Priority: Major
>
> In current Parquet implementations, if the Bloom filter's NDV is not set, most
> implementations guess 1M as the NDV and size the filter from that together
> with the FPP. So with an FPP of 0.01, the Bloom filter may grow to 2 MB per
> column, which is really huge (the arithmetic behind that figure is sketched
> after the snippet below). Should we support compression for the Bloom filter,
> like:
>
> ```
> /**
>  * The compression used in the Bloom filter.
>  **/
> struct Uncompressed {}
>
> union BloomFilterCompression {
>   1: Uncompressed UNCOMPRESSED;
> + 2: CompressionCodec COMPRESSION;
> }
> ```
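For context on where the 2 MB figure comes from, here is a sketch of the sizing
arithmetic (my illustration, not code from the issue), assuming the textbook
optimal-bits formula and the power-of-two rounding applied by parquet-mr's
BlockSplitBloomFilter:

```
public class BloomSizing {
    public static void main(String[] args) {
        long ndv = 1_000_000L; // the 1M guess used when no NDV is provided
        double fpp = 0.01;     // target false-positive probability

        // Textbook optimal bit count: m = -n * ln(p) / (ln 2)^2
        double bits = -ndv * Math.log(fpp) / (Math.log(2) * Math.log(2));
        System.out.printf("optimal: %.0f bits (~%.2f MiB)%n",
                bits, bits / 8 / (1 << 20));

        // Rounding up to the next power of two turns ~9.6 Mbit into 16 Mbit,
        // i.e. 2 MiB of filter per column.
        long rounded = Long.highestOneBit((long) bits - 1) << 1;
        System.out.printf("rounded: %d bits (= %d MiB)%n",
                rounded, rounded / 8 / (1 << 20));
    }
}
```

Under that rounding, a 2 MiB filter holding far fewer than 1M distinct values
is mostly zero bits, which is exactly the case where a general-purpose codec
(via the union above) would recover most of the space.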