[ https://issues.apache.org/jira/browse/PARQUET-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17268637#comment-17268637 ]

ASF GitHub Bot commented on PARQUET-1964:
-----------------------------------------

gszadovszky commented on a change in pull request #856:
URL: https://github.com/apache/parquet-mr/pull/856#discussion_r561042620



##########
File path: 
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java
##########
@@ -1046,6 +1046,8 @@ private ColumnIndexStore getColumnIndexStore(int 
blockIndex) {
   }
 
   private RowRanges getRowRanges(int blockIndex) {
+    assert FilterCompat

Review comment:
       I think we do not need to cover the cases of `NOOP` or `null` filters 
in every place of the API. `ColumnIndexFilter` is not really for public use; I 
do not see why a client would use it directly. I think the current handling is 
correct: an NPE is thrown in case of `null`, and the correct filtering is done 
otherwise.
   
   What I wanted to cover here is to handle `null` filters properly where 
the user faces them (by setting the filter in `ParquetReadOptions`) and 
also to ensure there are no unnecessary reads (for column index or bloom 
filter) in case of `NOOP`.
   
   I think this issue is not a bug anyway: there is no correctness issue if the 
user does not set the filter explicitly to `null`.
   
   About `assert`: I think it is a good tool to ensure something that should 
hold anyway. I agree it is not suitable for checking anything in a public 
method, but where the callers are properly controlled (e.g. in a private 
method) I think it is fine. `assert` is fine if you could simply skip it 
without the risk of any failure. Keeping it in a private method has no 
performance penalty in a production environment, while the unit tests would 
catch any issue if the callers are modified.
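The split described above (explicit validation at the public entry point, `assert` only in private methods with controlled callers) can be sketched as follows. This is an illustrative example, not the actual parquet-mr code; the class and method names are hypothetical.

```java
import java.util.Objects;

// Hypothetical sketch of the validation pattern discussed above.
// Public API: fail fast with a clear message. Private method: document the
// invariant with an assert, which costs nothing in production (asserts run
// only when the JVM is started with -ea).
public class FilterGuardExample {

  public long filteredCount(Object filter) {
    // Public entry point: reject null explicitly instead of letting a
    // NullPointerException surface from deep inside the implementation.
    Objects.requireNonNull(filter, "filter must not be null");
    return countRows(filter);
  }

  private long countRows(Object filter) {
    // Private method: callers are controlled within this class, so an assert
    // is enough; unit tests (run with -ea) would catch a broken caller.
    assert filter != null : "countRows must not be called with a null filter";
    return 0L; // placeholder for the real row-counting logic
  }
}
```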




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


> Properly handle missing/null filter
> -----------------------------------
>
>                 Key: PARQUET-1964
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1964
>             Project: Parquet
>          Issue Type: Improvement
>            Reporter: Yuming Wang
>            Assignee: Gabor Szadovszky
>            Priority: Major
>
> How to reproduce this issue:
> {code:scala}
> val hadoopInputFile = HadoopInputFile.fromPath(new 
> Path("/path/to/parquet/000.snappy.parquet"), new Configuration())
> val reader = ParquetFileReader.open(hadoopInputFile)
> val recordCount = reader.getFilteredRecordCount
> reader.close()
> {code}
> Output:
> {noformat}
> java.lang.NullPointerException was thrown.
> java.lang.NullPointerException
>       at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.calculateRowRanges(ColumnIndexFilter.java:81)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader.getRowRanges(ParquetFileReader.java:961)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader.getFilteredRecordCount(ParquetFileReader.java:766)
> {noformat}
> UPDATE: This is not only about the potential NPE if a {{null}} filter is set 
> but also about handling the missing/null filter in a better-performing way. 
> (Currently a NOOP filter implementation is used by default if no filter is 
> set, which requires loading the related data for column index/bloom filter 
> even if no actual filtering will occur.)
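The performance point in the UPDATE above can be sketched as a short-circuit: if the filter is missing or NOOP, skip the extra I/O for column indexes and bloom filters entirely. This is a hypothetical sketch, not the parquet-mr implementation; the type and method names are illustrative.

```java
// Hypothetical sketch: skip expensive index reads when no real filtering
// will happen. Names are illustrative, not actual parquet-mr API.
public class FilterShortCircuit {

  enum FilterKind { NOOP, REAL }

  static boolean shouldReadColumnIndexes(FilterKind filter) {
    // Treat a missing (null) filter the same as NOOP: no rows will be
    // dropped, so reading column indexes / bloom filters is wasted I/O.
    return filter != null && filter != FilterKind.NOOP;
  }
}
```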



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
