flyrain commented on a change in pull request #2642:
URL: https://github.com/apache/iceberg/pull/2642#discussion_r745214311



##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/ParquetWriter.java
##########
@@ -197,9 +198,11 @@ private void startRowGroup() {
 
     PageWriteStore pageStore = pageStoreCtorParquet.newInstance(
         compressor, parquetSchema, props.getAllocator(), this.columnIndexTruncateLength);
+    Preconditions.checkState(pageStore instanceof BloomFilterWriteStore,
+        "pageStore must be an instance of BloomFilterWriteStore");

Review comment:
       Do we need this check? The cast at line 205 throws a `ClassCastException` anyway.
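   A self-contained sketch of the two failure modes, using stand-in types rather than the real Parquet interfaces, just to illustrate the point:
   ```java
   // Stand-in types for illustration only; not the real Parquet interfaces.
   interface PageWriteStore {}
   interface BloomFilterWriteStore {}

   class PlainPageStore implements PageWriteStore {}  // does NOT implement BloomFilterWriteStore

   public class CastVsCheck {
     public static void main(String[] args) {
       PageWriteStore pageStore = new PlainPageStore();

       // Without the precondition, the downstream cast fails on its own:
       try {
         BloomFilterWriteStore bloomStore = (BloomFilterWriteStore) pageStore;
       } catch (ClassCastException e) {
         System.out.println("cast failed: " + e);
       }

       // With the precondition, the same condition fails a few lines earlier,
       // with the custom message instead of the JVM-generated one.
       if (!(pageStore instanceof BloomFilterWriteStore)) {
         System.out.println("pageStore must be an instance of BloomFilterWriteStore");
       }
     }
   }
   ```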

##########
File path: site/docs/configuration.md
##########
@@ -40,6 +40,9 @@ Iceberg tables support table properties to configure table behavior, like the de
 | write.parquet.dict-size-bytes      | 2097152 (2 MB)     | Parquet dictionary page size                       |
 | write.parquet.compression-codec    | gzip               | Parquet compression codec                          |
 | write.parquet.compression-level    | null               | Parquet compression level                          |
+| write.parquet.bloom-filter-enabled | false | Whether to enable writing bloom filter; If it is true, the bloom filter will be enable for all columns; If it is false, it will be disabled for all columns; It is also possible to enable it for some columns by specifying the column name within the property followed by #; For example, setting both `write.parquet.bloom-filter-enabled=true` and `write.parquet.bloom-filter-enabled#some_column=false` will enable bloom filter for all columns except `some_column` |

Review comment:
       BTW, how does a user specify multiple columns? Like this, or something else?
   ```
   write.parquet.bloom-filter-enabled#column1=false
   write.parquet.bloom-filter-enabled#column2=false
   ```
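   For what it's worth, here is how multiple per-column keys would flow through the `ColumnConfigParser` quoted further down in this thread; the loop below mirrors its prefix matching, and the map contents are made up for illustration:
   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class MultiColumnDemo {
     public static void main(String[] args) {
       Map<String, String> props = new LinkedHashMap<>();
       props.put("write.parquet.bloom-filter-enabled", "true");
       props.put("write.parquet.bloom-filter-enabled#column1", "false");
       props.put("write.parquet.bloom-filter-enabled#column2", "false");

       // Mirrors ConfigHelper.processKey: every key starting with "<rootKey>#"
       // yields one (columnPath, value) pair, so each column gets its own entry.
       String prefix = "write.parquet.bloom-filter-enabled#";
       for (Map.Entry<String, String> entry : props.entrySet()) {
         if (entry.getKey().startsWith(prefix)) {
           String columnPath = entry.getKey().substring(prefix.length());
           boolean enabled = Boolean.parseBoolean(entry.getValue());
           System.out.println(columnPath + " -> " + enabled);  // column1 -> false, column2 -> false
         }
       }
     }
   }
   ```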

##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/ColumnConfigParser.java
##########
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.parquet;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * TODO: Once org.apache.parquet.hadoop.ColumnConfigParser is made public, should replace this class.
+ * Parses the specified key-values in the format of root.key#column.path from a {@link Configuration} object.
+ */
+class ColumnConfigParser {
+
+  private static class ConfigHelper<T> {
+    private final String prefix;
+    private final Function<String, T> function;
+    private final BiConsumer<String, T> consumer;
+
+    ConfigHelper(String prefix, Function<String, T> function, BiConsumer<String, T> consumer) {
+      this.prefix = prefix;
+      this.function = function;
+      this.consumer = consumer;
+    }
+
+    public void processKey(String key) {
+      if (key.startsWith(prefix)) {
+        String columnPath = key.substring(prefix.length());
+        T value = function.apply(key);
+        consumer.accept(columnPath, value);
+      }
+    }
+  }
+
+  private final List<ConfigHelper<?>> helpers = new ArrayList<>();
+
+  public <T> ColumnConfigParser withColumnConfig(String rootKey, Function<String, T> function,
+      BiConsumer<String, T> consumer) {
+    helpers.add(new ConfigHelper<T>(rootKey + '#', function, consumer));
+    return this;
+  }
+
+  public void parseConfig(Configuration conf) {
+    for (Map.Entry<String, String> entry : conf) {
+      for (ConfigHelper<?> helper : helpers) {
+        // We retrieve the value from function instead of parsing from the string here to use the exact implementations
+        // in Configuration
+        helper.processKey(entry.getKey());
+      }
+    }
+  }
+
+  public void parseConfig(Map<String, String> conf) {
+    for (Map.Entry<String, String> entry : conf.entrySet()) {
+      for (ConfigHelper<?> helper : helpers) {
+        // We retrieve the value from function instead of parsing from the string here to use the exact implementations
+        // in Configuration
+        helper.processKey(entry.getKey());
+      }
+    }

Review comment:
       Extract it into a function like `parseConfig(Iterable<Map.Entry<String, String>> entrySet)` to avoid the duplication?
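   Roughly the shape I mean, sketched against the class as quoted above (only the two `parseConfig` overloads change, the rest of the class stays as is; `parseEntries` is just an illustrative name):
   ```java
   // Both public overloads delegate to one shared loop over the entries.
   // org.apache.hadoop.conf.Configuration implements Iterable<Map.Entry<String, String>>,
   // so it can be passed straight through.
   public void parseConfig(Configuration conf) {
     parseEntries(conf);
   }

   public void parseConfig(Map<String, String> conf) {
     parseEntries(conf.entrySet());
   }

   private void parseEntries(Iterable<Map.Entry<String, String>> entries) {
     for (Map.Entry<String, String> entry : entries) {
       for (ConfigHelper<?> helper : helpers) {
         // We retrieve the value from function instead of parsing from the string here
         // to use the exact implementations in Configuration
         helper.processKey(entry.getKey());
       }
     }
   }
   ```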

##########
File path: site/docs/configuration.md
##########
@@ -40,6 +40,9 @@ Iceberg tables support table properties to configure table behavior, like the de
 | write.parquet.dict-size-bytes      | 2097152 (2 MB)     | Parquet dictionary page size                       |
 | write.parquet.compression-codec    | gzip               | Parquet compression codec                          |
 | write.parquet.compression-level    | null               | Parquet compression level                          |
+| write.parquet.bloom-filter-enabled | false | Whether to enable writing bloom filter; If it is true, the bloom filter will be enable for all columns; If it is false, it will be disabled for all columns; It is also possible to enable it for some columns by specifying the column name within the property followed by #; For example, setting both `write.parquet.bloom-filter-enabled=true` and `write.parquet.bloom-filter-enabled#some_column=false` will enable bloom filter for all columns except `some_column` |

Review comment:
       Minor suggestions. How about this?
   ```
   Whether to enable or disable the bloom filter for all columns by default. Individual columns can be enabled or disabled by specifying the column name. For example, ...
   ```
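   For example, a user could set the pair of properties from the table description programmatically; a minimal sketch, assuming `table` is an already-loaded `org.apache.iceberg.Table` and `some_column` is just a placeholder:
   ```java
   // Enable bloom filters for all columns by default, then opt one column out.
   table.updateProperties()
       .set("write.parquet.bloom-filter-enabled", "true")
       .set("write.parquet.bloom-filter-enabled#some_column", "false")
       .commit();
   ```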




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
