[ https://issues.apache.org/jira/browse/PARQUET-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabor Szadovszky resolved PARQUET-1964.
---------------------------------------
    Resolution: Fixed

> Properly handle missing/null filter
> -----------------------------------
>
>                 Key: PARQUET-1964
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1964
>             Project: Parquet
>          Issue Type: Improvement
>            Reporter: Yuming Wang
>            Assignee: Gabor Szadovszky
>            Priority: Major
>
> How to reproduce this issue:
> {code:scala}
> val hadoopInputFile = HadoopInputFile.fromPath(new Path("/path/to/parquet/000.snappy.parquet"), new Configuration())
> val reader = ParquetFileReader.open(hadoopInputFile)
> val recordCount = reader.getFilteredRecordCount
> reader.close()
> {code}
> Output:
> {noformat}
> java.lang.NullPointerException was thrown.
> java.lang.NullPointerException
>       at org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.calculateRowRanges(ColumnIndexFilter.java:81)
>       at org.apache.parquet.hadoop.ParquetFileReader.getRowRanges(ParquetFileReader.java:961)
>       at org.apache.parquet.hadoop.ParquetFileReader.getFilteredRecordCount(ParquetFileReader.java:766)
> {noformat}
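> A possible workaround until the fix is picked up (just a sketch, not the change made for this issue): supply an explicit NOOP record filter through the read options so the column index code path is never handed a {{null}} filter. The file path is the reporter's placeholder.
> {code:scala}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.Path
> import org.apache.parquet.HadoopReadOptions
> import org.apache.parquet.filter2.compat.FilterCompat
> import org.apache.parquet.hadoop.ParquetFileReader
> import org.apache.parquet.hadoop.util.HadoopInputFile
>
> val conf = new Configuration()
> val inputFile = HadoopInputFile.fromPath(new Path("/path/to/parquet/000.snappy.parquet"), conf)
> // Pass an explicit NOOP filter so getRowRanges does not dereference a null record filter
> val options = HadoopReadOptions.builder(conf).withRecordFilter(FilterCompat.NOOP).build()
> val reader = ParquetFileReader.open(inputFile, options)
> try {
>   val recordCount = reader.getFilteredRecordCount
>   println(recordCount)
> } finally {
>   reader.close()
> }
> {code}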
> UPDATE: This is not only about the potential NPE when a {{null}} filter is set, 
> but also about handling a missing/null filter in a better-performing way. 
> (Currently a NOOP filter implementation is used by default if no filter is set, 
> which requires loading the related column index/bloom filter data even though no 
> actual filtering will occur.)
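> On the performance side, a minimal sketch of what a caller can do today to avoid the column index/bloom filter I/O (the helper is hypothetical, not parquet-mr API): fall back to the plain record count whenever no real filter is set.
> {code:scala}
> import org.apache.parquet.filter2.compat.FilterCompat
> import org.apache.parquet.hadoop.ParquetFileReader
>
> // Hypothetical helper: take the filtered path only when an actual filter is present,
> // otherwise use the row-group metadata counts and skip column index/bloom filter loading.
> def recordCount(reader: ParquetFileReader, filter: FilterCompat.Filter): Long =
>   if (filter == null || filter == FilterCompat.NOOP) reader.getRecordCount
>   else reader.getFilteredRecordCount
> {code}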



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
