[jira] [Updated] (DRILL-4349) parquet reader returns wrong results when reading a nullable column that starts with a large number of nulls (>30k)
[ https://issues.apache.org/jira/browse/DRILL-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Altekruse updated DRILL-4349:
-----------------------------------
    Fix Version/s:     (was: 1.6.0)
                       1.5.0

> parquet reader returns wrong results when reading a nullable column that
> starts with a large number of nulls (>30k)
> ------------------------------------------------------------------------
>
>                 Key: DRILL-4349
>                 URL: https://issues.apache.org/jira/browse/DRILL-4349
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Parquet
>    Affects Versions: 1.4.0
>            Reporter: Deneche A. Hakim
>            Assignee: Jason Altekruse
>            Priority: Critical
>             Fix For: 1.5.0
>
>         Attachments: drill4349.tar.gz
>
>
> While reading a nullable column, if in a single pass we only read null
> values, the parquet reader resets the value of pageReader.readPosInBytes,
> which leads to wrong data being read from the file.
> To reproduce the issue, create a csv file (repro.csv) with 2 columns (id,
> val) and 50100 rows, where id equals the row number and val is empty for
> the first 50k rows and equal to id for the remaining rows.
> Create a parquet table from the csv file:
> {noformat}
> CREATE TABLE `repro_parquet` AS SELECT CAST(columns[0] AS INT) AS id,
> CAST(NULLIF(columns[1], '') AS DOUBLE) AS val FROM `repro.csv`;
> {noformat}
> Now if you query any of the non-null values you will get wrong results:
> {noformat}
> 0: jdbc:drill:zk=local> select * from `repro_parquet` where id>=50000 limit 10;
> +--------+---------------------------+
> |   id   |            val            |
> +--------+---------------------------+
> | 50000  | 9.11337776337441E-309     |
> | 50001  | 3.26044E-319              |
> | 50002  | 1.4916681476489723E-154   |
> | 50003  | 2.18890676                |
> | 50004  | 2.681561588521345E154     |
> | 50005  | -2.1016574E-317           |
> | 50006  | -1.4916681476489723E-154  |
> | 50007  | -2.18890676               |
> | 50008  | -2.681561588521345E154    |
> | 50009  | 2.1016574E-317            |
> +--------+---------------------------+
> 10 rows selected (0.238 seconds)
> {noformat}
> and here are the expected values:
> {noformat}
> 0: jdbc:drill:zk=local> select * from `repro.csv` where cast(columns[0] as int)>=50000 limit 10;
> +--------------------+
> |      columns       |
> +--------------------+
> | ["50000","50000"]  |
> | ["50001","50001"]  |
> | ["50002","50002"]  |
> | ["50003","50003"]  |
> | ["50004","50004"]  |
> | ["50005","50005"]  |
> | ["50006","50006"]  |
> | ["50007","50007"]  |
> | ["50008","50008"]  |
> | ["50009","50009"]  |
> +--------------------+
> {noformat}
> I confirmed that the file is written correctly and the issue is in the
> parquet reader (already have a fix for it)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
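The repro csv described above can be generated with a short script. This is a sketch based on the description in the issue (50100 rows, id equal to the row number, val empty for the first 50k rows); 1-based row numbering is assumed, since the report does not specify it:

```python
# Generate repro.csv as described in DRILL-4349: 50,100 rows of (id, val),
# where id is the row number, val is empty for the first 50,000 rows,
# and val equals id for the remaining 100 rows.
NUM_ROWS = 50_100
NUM_NULLS = 50_000

with open("repro.csv", "w") as f:
    for row in range(1, NUM_ROWS + 1):
        # Empty val for the first NUM_NULLS rows, so the DOUBLE column
        # starts with a long run of nulls after the NULLIF cast.
        val = "" if row <= NUM_NULLS else str(row)
        f.write(f"{row},{val}\n")
```

Running the CTAS statement above against this file should then reproduce the wrong results.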
[jira] [Updated] (DRILL-4349) parquet reader returns wrong results when reading a nullable column that starts with a large number of nulls (>30k)
[ https://issues.apache.org/jira/browse/DRILL-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chun Chang updated DRILL-4349:
------------------------------
    Reviewer: Chun Chang

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-4349) parquet reader returns wrong results when reading a nullable column that starts with a large number of nulls (>30k)
[ https://issues.apache.org/jira/browse/DRILL-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deneche A. Hakim updated DRILL-4349:
------------------------------------
    Assignee: Parth Chandra  (was: Deneche A. Hakim)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-4349) parquet reader returns wrong results when reading a nullable column that starts with a large number of nulls (>30k)
[ https://issues.apache.org/jira/browse/DRILL-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deneche A. Hakim updated DRILL-4349:
------------------------------------
    Attachment: drill4349.tar.gz

attached csv file

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
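A side note on the symptom reported above: the mix of tiny denormal values (e.g. 3.26044E-319) and huge magnitudes is characteristic of decoding 8-byte doubles from a misaligned byte offset. The following is an illustrative sketch only, not Drill's actual reader code: it lays out doubles the way Parquet's PLAIN encoding does (little-endian, back to back) and then decodes from a stale offset, standing in for a wrongly reset readPosInBytes.

```python
import struct

# Pack a run of doubles as Parquet PLAIN encoding lays them out:
# little-endian, 8 bytes each, contiguous.
values = [50001.0, 50002.0, 50003.0, 50004.0]
buf = b"".join(struct.pack("<d", v) for v in values)

# Decoding from the correct offsets recovers the original values.
correct = [struct.unpack_from("<d", buf, 8 * i)[0] for i in range(3)]

# Decoding from a stale offset (here 4 bytes off) reinterprets bytes that
# straddle two adjacent values; the exponent field lands on mantissa bytes,
# which typically yields denormal-looking garbage like the values in the bug.
misaligned = [struct.unpack_from("<d", buf, 4 + 8 * i)[0] for i in range(3)]
```

With this input, the first misaligned read comes out as a denormal on the order of 1e-315 rather than 50001.0, mirroring the kind of corruption shown in the query output.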