[
https://issues.apache.org/jira/browse/PARQUET-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17268620#comment-17268620
]
ASF GitHub Bot commented on PARQUET-1964:
-----------------------------------------
shangxinli commented on a change in pull request #856:
URL: https://github.com/apache/parquet-mr/pull/856#discussion_r561018198
##########
File path:
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java
##########
@@ -1046,6 +1046,8 @@ private ColumnIndexStore getColumnIndexStore(int
blockIndex) {
}
private RowRanges getRowRanges(int blockIndex) {
+ assert FilterCompat
Review comment:
This private method is only called from two places, and both ensure a NOOP
filter cannot be present. That is probably why the tests that use NOOP in many
places still pass. Given it is a private method, that is fine, but it reminds
me that the public method calculateRowRanges() doesn't check whether the
passed-in filter is null/NOOP. Do we need to check that too?
Another question is whether we should use an assert or an exception, since
asserts can be turned off at runtime. There is some discussion of assert vs.
exception on Stack
Overflow (https://stackoverflow.com/questions/1276308/exception-vs-assertion#:~:text=Use%20assertions%20for%20internal%20logic,should%20be%20explicit%20using%20exceptions.).
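The difference the comment raises can be demonstrated with a minimal sketch; the class and method names below are illustrative, not parquet-mr code. A Java `assert` is compiled in but only evaluated when the JVM runs with `-ea`, so a null filter can slip past it silently, while an explicit `Objects.requireNonNull` always fires:

```java
import java.util.Objects;

public class FilterChecks {
  // With "assert", the check disappears unless the JVM is started with -ea.
  static Object withAssert(Object filter) {
    assert filter != null : "filter must not be null";
    return filter; // still reached for null when assertions are disabled
  }

  // An explicit exception fires regardless of JVM flags.
  static Object withException(Object filter) {
    return Objects.requireNonNull(filter, "filter must not be null");
  }

  public static void main(String[] args) {
    try {
      withAssert(null);
      System.out.println("assert skipped (assertions disabled)");
    } catch (AssertionError e) {
      System.out.println("assert fired (running with -ea)");
    }
    try {
      withException(null);
    } catch (NullPointerException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

This is why argument validation on a public entry point such as calculateRowRanges() is usually done with an exception rather than an assert.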
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Properly handle missing/null filter
> -----------------------------------
>
> Key: PARQUET-1964
> URL: https://issues.apache.org/jira/browse/PARQUET-1964
> Project: Parquet
> Issue Type: Improvement
> Reporter: Yuming Wang
> Assignee: Gabor Szadovszky
> Priority: Major
>
> How to reproduce this issue:
> {code:scala}
> val hadoopInputFile = HadoopInputFile.fromPath(new Path("/path/to/parquet/000.snappy.parquet"), new Configuration())
> val reader = ParquetFileReader.open(hadoopInputFile)
> val recordCount = reader.getFilteredRecordCount
> reader.close()
> {code}
> Output:
> {noformat}
> java.lang.NullPointerException was thrown.
> java.lang.NullPointerException
> at
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.calculateRowRanges(ColumnIndexFilter.java:81)
> at
> org.apache.parquet.hadoop.ParquetFileReader.getRowRanges(ParquetFileReader.java:961)
> at
> org.apache.parquet.hadoop.ParquetFileReader.getFilteredRecordCount(ParquetFileReader.java:766)
> {noformat}
> UPDATE: This is not only about the potential NPE if a {{null}} filter is set
> but also about handling a missing/null filter in a better-performing way.
> (Currently a NOOP filter implementation is used by default if no filter is
> set, which requires loading the related column index/bloom filter data even
> if no actual filtering will occur.)
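The performance concern in the update can be sketched as a short-circuit guard; everything below (the Filter interface, NOOP sentinel, and return values) is hypothetical and only illustrates the idea of skipping index loading when no real filter is set, not the actual parquet-mr implementation:

```java
// Hypothetical sketch of the "skip index loading for a missing/NOOP filter"
// idea; none of these names are from parquet-mr.
public class FilteredReaderSketch {
  interface Filter {}
  static final Filter NOOP = new Filter() {};

  static boolean filteringRequired(Filter filter) {
    // A missing or NOOP filter selects every row, so there is no reason to
    // read column indexes or bloom filters at all.
    return filter != null && filter != NOOP;
  }

  static String getRowRanges(Filter filter, long rowCount) {
    if (!filteringRequired(filter)) {
      // Short-circuit: every row matches without touching any index data.
      return "all rows [0, " + rowCount + ")";
    }
    return "ranges computed from column indexes";
  }

  public static void main(String[] args) {
    System.out.println(getRowRanges(null, 100));
    System.out.println(getRowRanges(NOOP, 100));
  }
}
```

With such a guard, a caller like getFilteredRecordCount would neither hit an NPE nor pay the cost of loading column index data when no filtering is requested.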
--
This message was sent by Atlassian Jira
(v8.3.4#803005)