[ https://issues.apache.org/jira/browse/PARQUET-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nandor Kollar updated PARQUET-182:
----------------------------------
    Fix Version/s:     (was: 1.9.0)

> FilteredRecordReader skips rows it shouldn't for schema with optional columns
> -----------------------------------------------------------------------------
>
>                 Key: PARQUET-182
>                 URL: https://issues.apache.org/jira/browse/PARQUET-182
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.5.0, 1.6.0, 1.7.0
>         Environment: Linux, Java7/Java8
>            Reporter: Steven Mellinger
>            Priority: Blocker
>
> When using an UnboundRecordFilter with nested AND/OR filters over OPTIONAL 
> columns, there are cases where the value read during filtering does not 
> match the current record's column value.
> The filter predicate structure that results in incorrect filtering is: 
> (x && (y || z)) (a construction sketch appears after this description).
> Stepping through with a debugger, I can see that the value read from the 
> ColumnReader inside my Predicate differs from the value actually stored for 
> that row.
> Looking deeper, there is a buffer of dictionary keys in 
> RunLengthBitPackingHybridDecoder (the column is RLE/dictionary encoded). The 
> buffer contains only two distinct keys, [0, 1], whereas my optional column 
> has three distinct values, [null, 0, 1]. For example, with a column whose 
> values are 5, 10, 10, null, 10 and dictionary keys 0 -> 5 and 1 -> 10, the 
> buffer would hold 0, 1, 1, 1, 0, so reading the last row returns 0 -> 5 
> instead of 10.
> So it appears that nothing keeps track of where the nulls appear (a small 
> standalone sketch of that bookkeeping also follows below).
> I hope someone can take a look, as this is a blocker for my project.
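For context, a filter of the (x && (y || z)) shape can be assembled with the old record filter API roughly as below. This is only a sketch: the column names "x", "y", "z" and the target values are placeholders, and the imports assume the pre-1.7 parquet.* package namespace (1.7.0 moved the same classes under org.apache.parquet.*).

{code:java}
import parquet.filter.AndRecordFilter;
import parquet.filter.ColumnPredicates;
import parquet.filter.ColumnRecordFilter;
import parquet.filter.OrRecordFilter;
import parquet.filter.UnboundRecordFilter;

public class FilterShape {

  // Builds a filter of the shape (x && (y || z)).
  // Column names and target values are placeholders, not the reporter's real ones.
  public static UnboundRecordFilter build() {
    UnboundRecordFilter x = ColumnRecordFilter.column("x", ColumnPredicates.equalTo(1));
    UnboundRecordFilter y = ColumnRecordFilter.column("y", ColumnPredicates.equalTo(0));
    UnboundRecordFilter z = ColumnRecordFilter.column("z", ColumnPredicates.equalTo(1));
    return AndRecordFilter.and(x, OrRecordFilter.or(y, z));
  }
}
{code}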
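And a self-contained illustration of the null bookkeeping the description above implies. The arrays are illustrative only, not a dump of Parquet's internal buffers: dictionary keys exist only for non-null rows, so a lookup for a given row has to account for the nulls before it.

{code:java}
public class NullTrackingSketch {
  public static void main(String[] args) {
    int[] dictionary = {5, 10};               // key 0 -> 5, key 1 -> 10
    int[] definitionLevels = {1, 1, 1, 0, 1}; // 0 marks the null at row 3
    int[] dictKeys = {0, 1, 1, 1};            // keys exist only for the four non-null rows

    // Correct lookup for the last row (row 4): count how many earlier rows were
    // non-null and use that count as the position into the key buffer.
    int row = 4;
    int nonNullsBefore = 0;
    for (int i = 0; i < row; i++) {
      if (definitionLevels[i] != 0) nonNullsBefore++;
    }
    System.out.println("row 4 -> " + dictionary[dictKeys[nonNullsBefore]]); // prints 10

    // A reader that indexes by row position alone, ignoring the null at row 3,
    // looks one slot too far into the key buffer and comes back with the wrong
    // key, which is the kind of mismatch the debugger session showed (0 -> 5
    // instead of 1 -> 10).
  }
}
{code}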



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
