garawalid commented on a change in pull request #781:
URL: https://github.com/apache/parquet-mr/pull/781#discussion_r412497639



##########
File path: parquet-hadoop/README.md
##########
@@ -230,23 +236,28 @@ conf.set("parquet.bloom.filter.expected.ndv#column.path", 
200)
 ## Class: ParquetInputFormat
 
 **Property:** `parquet.read.support.class`  
-**Description:** The read support class.
+**Description:** The read support class that is used in
+ParquetInputFormat to materialize records. It should be a descendant class 
of `org.apache.parquet.hadoop.api.ReadSupport`.
 
 ---
 
 **Property:** `parquet.read.filter`  
-**Description:** **Todo**
+**Description:** The filter class name that implements 
`org.apache.parquet.filter.UnboundRecordFilter`. This class is for the old 
filter API in the package `org.apache.parquet.filter`; it filters records 
during record assembly.
 
 ---
 
-**Property:** `parquet.strict.typing`  
-**Description:** Whether to enable type checking for conflicting schema.  
-**Default value:** `true`
+**Property:** `parquet.private.read.filter.predicate`  
+**Description:** The filter predicate used by the new filter API in the 
package `org.apache.parquet.filter2.predicate`. 
Note that this class should implement 
`org.apache.parquet.filter2.predicate.FilterPredicate`, and the value of this 
property should be a gzip-compressed, base64-encoded, Java-serialized object.  

Review comment:
       I think it's okay if we keep the details of the object. After all, we 
will suggest using the `setFilterPredicate` method.
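As a sketch of the suggested approach (assuming the standard parquet-hadoop filter2 API; the column name `id` and the threshold are illustrative), setting a predicate through `ParquetInputFormat.setFilterPredicate` instead of writing the serialized property by hand might look like:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.hadoop.ParquetInputFormat;

public class FilterPredicateExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Build a predicate: keep rows where column "id" > 100.
        FilterPredicate pred = FilterApi.gt(FilterApi.longColumn("id"), 100L);

        // setFilterPredicate serializes the predicate and stores it under
        // parquet.private.read.filter.predicate, so callers never deal with
        // the gzip/base64 encoding themselves.
        ParquetInputFormat.setFilterPredicate(conf, pred);
    }
}
```

This is why the property is marked `private`: users are expected to go through the helper method, and the encoded value is an implementation detail.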




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

