[ https://issues.apache.org/jira/browse/PARQUET-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707798#comment-17707798 ]

ASF GitHub Bot commented on PARQUET-2260:
-----------------------------------------

wgtmac commented on code in PR #1043:
URL: https://github.com/apache/parquet-mr/pull/1043#discussion_r1155587214


##########
parquet-column/src/main/java/org/apache/parquet/column/impl/ColumnWriterBase.java:
##########
@@ -97,7 +97,7 @@ abstract class ColumnWriterBase implements ColumnWriter {
       int optimalNumOfBits = BlockSplitBloomFilter.optimalNumOfBits(ndv.getAsLong(), fpp.getAsDouble());
       this.bloomFilter = new BlockSplitBloomFilter(optimalNumOfBits / 8, maxBloomFilterSize);
     } else {
-      this.bloomFilter = new BlockSplitBloomFilter(maxBloomFilterSize);
+      this.bloomFilter = BlockSplitBloomFilter.of(maxBloomFilterSize);

Review Comment:
   The goal is to fix the issue that the bloom filter size can be larger than `parquet.bloom.filter.max.bytes`. The problematic constructor calls are shown below:
   
   ```java
       int maxBloomFilterSize = props.getMaxBloomFilterBytes();
   
       OptionalLong ndv = props.getBloomFilterNDV(path);
       OptionalDouble fpp = props.getBloomFilterFPP(path);
       // If user specify the column NDV, we construct Bloom filter from it.
       if (ndv.isPresent()) {
         int optimalNumOfBits = BlockSplitBloomFilter.optimalNumOfBits(ndv.getAsLong(), fpp.getAsDouble());
         this.bloomFilter = new BlockSplitBloomFilter(optimalNumOfBits / 8, maxBloomFilterSize);
       } else {
         this.bloomFilter = new BlockSplitBloomFilter(maxBloomFilterSize);
       }
   ```
   
   They all delegate to `public BlockSplitBloomFilter(int numBytes, int minimumBytes, int maximumBytes, HashStrategy hashStrategy)`:
   ```java
     public BlockSplitBloomFilter(int numBytes) {
       this(numBytes, LOWER_BOUND_BYTES, UPPER_BOUND_BYTES, HashStrategy.XXH64);
     }
   
     public BlockSplitBloomFilter(int numBytes, int maximumBytes) {
       this(numBytes, LOWER_BOUND_BYTES, maximumBytes, HashStrategy.XXH64);
     }
   ```
   
   So it seems to me that the issue comes from `public BlockSplitBloomFilter(int numBytes)`, where `maxBloomFilterSize` is passed as `numBytes` and the actual size can then be adjusted up to values as large as `UPPER_BOUND_BYTES`.
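   The adjustment described above can be illustrated with a small sketch. Note this is a hedged illustration, not the actual parquet-mr code: `nextPowerOfTwo` is a hypothetical helper mirroring the round-up-to-a-power-of-two behavior.

   ```java
   // Hypothetical sketch of the round-up step described above; not the
   // actual parquet-mr implementation.
   public class RoundUpSketch {
     // Round n up to the next power of two (assumes 2 <= n <= 2^30).
     static int nextPowerOfTwo(int n) {
       return Integer.highestOneBit(n - 1) << 1;
     }

     public static void main(String[] args) {
       int maxBloomFilterSize = 1024 * 1024 + 1; // 1048577, not a power of two
       // When passed as numBytes, the size is rounded up past the configured cap:
       System.out.println(nextPowerOfTwo(maxBloomFilterSize)); // prints 2097152
     }
   }
   ```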
   
   Is it possible to fix it like below?
   ```java
     public BlockSplitBloomFilter(int numBytes) {
       this(numBytes, LOWER_BOUND_BYTES, numBytes, HashStrategy.XXH64);
     }
   ```
   





>  Bloom filter bytes size shouldn't be larger than maxBytes size in the 
> configuration
> ------------------------------------------------------------------------------------
>
>                 Key: PARQUET-2260
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2260
>             Project: Parquet
>          Issue Type: Bug
>            Reporter: Mars
>            Assignee: Mars
>            Priority: Major
>
> Before this PR: if the {{parquet.bloom.filter.max.bytes}} configuration is not a 
> power of 2, the generated bloom filter can be larger than this value. For 
> example, setting {{parquet.bloom.filter.max.bytes}} to 1024 * 1024 + 1 = 
> 1048577 produces a bloom filter of 1024 * 1024 * 2 = 2097152 bytes, which does 
> not match the definition of the parameter.
> After this PR: the size is capped at the largest power of two not exceeding 
> {{parquet.bloom.filter.max.bytes}}; in this example it is 1024 * 1024 = 1048576.
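The capping described in the issue can be sketched as follows. This is a hedged illustration under the stated behavior, not parquet-mr API: `largestPowerOfTwoAtMost` is a hypothetical helper name.

```java
// Hypothetical sketch of the fix described above: cap the bloom filter at
// the largest power of two not exceeding parquet.bloom.filter.max.bytes.
public class CapSketch {
  static int largestPowerOfTwoAtMost(int maxBytes) {
    // The highest set bit of maxBytes is the largest power of two <= maxBytes.
    return Integer.highestOneBit(maxBytes);
  }

  public static void main(String[] args) {
    System.out.println(largestPowerOfTwoAtMost(1024 * 1024 + 1)); // prints 1048576
  }
}
```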



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
