jshmchenxi commented on a change in pull request #2642:
URL: https://github.com/apache/iceberg/pull/2642#discussion_r753761007



##########
File path: 
parquet/src/main/java/org/apache/iceberg/parquet/ColumnConfigParser.java
##########
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.parquet;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * TODO: Once org.apache.parquet.hadoop.ColumnConfigParser is made public, this
+ * class should be replaced with it.

Review comment:
       Thanks for the review. I've updated the configurations to be similar to the metrics properties.
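
For context, a minimal sketch of that metrics-style parsing (the class and helper names here are hypothetical, not from the patch): it collects per-column overrides such as `write.parquet.bloom-filter-enabled#some_column` into a column-to-value map.

```java
import java.util.HashMap;
import java.util.Map;

class ColumnPropertyParser {
  // Hypothetical helper: collect per-column overrides of the form
  // "<baseKey>#<column>" from the table properties into a column -> value map.
  static Map<String, String> columnOverrides(Map<String, String> props, String baseKey) {
    Map<String, String> overrides = new HashMap<>();
    String marker = baseKey + "#";
    for (Map.Entry<String, String> entry : props.entrySet()) {
      if (entry.getKey().startsWith(marker)) {
        overrides.put(entry.getKey().substring(marker.length()), entry.getValue());
      }
    }
    return overrides;
  }
}
```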

##########
File path: parquet/src/test/java/org/apache/iceberg/parquet/TestParquet.java
##########
@@ -127,4 +140,80 @@ public void testNumberOfBytesWritten() throws IOException {
         records.toArray(new GenericData.Record[]{}));
     return Pair.of(file, size);
   }
+
+  @Test
+  public void testBloomFilterWriteRead() throws IOException {
+    File parquetFile = generateFileWithBloomFilter();
+
+    try (ParquetFileReader reader = ParquetFileReader.open(ParquetIO.file(localInput(parquetFile)))) {
+      BlockMetaData rowGroup = reader.getRowGroups().get(0);
+      BloomFilterReader bloomFilterDataReader = reader.getBloomFilterDataReader(rowGroup);
+
+      ColumnChunkMetaData intColumn = rowGroup.getColumns().get(0);
+      BloomFilter intBloomFilter = bloomFilterDataReader.readBloomFilter(intColumn);
+      Assert.assertTrue(intBloomFilter.findHash(intBloomFilter.hash(30)));

Review comment:
       Bloom filter hashes can collide (false positives), so I didn't add exclusion tests asserting that absent values are rejected.
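
For illustration only (not part of the patch; this uses parquet-mr's `BlockSplitBloomFilter` directly): a bloom filter guarantees no false negatives for inserted values but allows false positives, so a negative assertion could fail intermittently on a hash collision.

```java
import org.apache.parquet.column.values.bloomfilter.BlockSplitBloomFilter;
import org.apache.parquet.column.values.bloomfilter.BloomFilter;

class BloomFilterFalsePositiveDemo {
  public static void main(String[] args) {
    BloomFilter filter = new BlockSplitBloomFilter(1024);
    filter.insertHash(filter.hash(30));

    // Guaranteed: no false negatives for inserted values.
    System.out.println(filter.findHash(filter.hash(30)));    // always true

    // Not guaranteed: a never-inserted value may still hit via a hash
    // collision, so Assert.assertFalse(...) here would make the test flaky.
    System.out.println(filter.findHash(filter.hash(99999))); // usually false
  }
}
```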

##########
File path: site/docs/configuration.md
##########
@@ -40,6 +40,9 @@ Iceberg tables support table properties to configure table behavior, like the de
 | write.parquet.dict-size-bytes      | 2097152 (2 MB)     | Parquet dictionary page size                       |
 | write.parquet.compression-codec    | gzip               | Parquet compression codec                          |
 | write.parquet.compression-level    | null               | Parquet compression level                          |
+| write.parquet.bloom-filter-enabled | false | Whether to write bloom filters; if true, bloom filters are enabled for all columns, and if false they are disabled for all columns; individual columns can be overridden by appending `#` and the column name to the property, e.g. setting both `write.parquet.bloom-filter-enabled=true` and `write.parquet.bloom-filter-enabled#some_column=false` enables bloom filters for all columns except `some_column` |
+| write.parquet.bloom-filter-max-bytes | 1048576 (1 MB) | The maximum number of bytes for a bloom filter bitset |
+| write.parquet.bloom-filter-expected-ndv | (not set) | The expected number of distinct values in a column, used to compute the optimal bloom filter size; if not set, the bloom filter uses the maximum size; when set for a column, the filter does not also need to be enabled via `write.parquet.bloom-filter-enabled`, e.g. setting `write.parquet.bloom-filter-expected-ndv#some_column=200` enables a bloom filter for `some_column` with an expected number of distinct values equal to 200 |

Review comment:
       Got it!
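
As a usage sketch (the table handle and column names are assumptions, not from the patch), these properties can be set through Iceberg's standard table-properties API:

```java
import org.apache.iceberg.Table;

class BloomFilterProperties {
  // `table` is an already-loaded Iceberg Table; `some_column` and `id` are
  // placeholder column names.
  static void enableBloomFilters(Table table) {
    table.updateProperties()
        .set("write.parquet.bloom-filter-enabled", "true")
        .set("write.parquet.bloom-filter-enabled#some_column", "false")
        .set("write.parquet.bloom-filter-expected-ndv#id", "200")
        .commit();
  }
}
```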




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


