[
https://issues.apache.org/jira/browse/HIVE-22670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17529476#comment-17529476
]
Pierre Gramme commented on HIVE-22670:
--------------------------------------
Hi,
I encountered the same problem as [~ganeshas] and was able to narrow it down to
a minimal reproducible example, cf. attachment.
My Parquet file is generated with Apache Arrow 7.0.0, using the R API (though I
don't think the API matters):
{{arrow::write_parquet(tibble::tibble(x=NA_integer_, y=1:2),
"test-arrow-int-na.parquet")}}
So the table has two integer columns, x and y:
||x||y||
|NULL|1|
|NULL|2|
{noformat}
create external table test_parquet_na (x integer, y integer) stored as parquet
location 'hdfs:///path/to/test_parquet_na/';
-- The following works as expected:
set hive.vectorized.execution.enabled=false;
select * from test_parquet_na;
select * from test_parquet_na order by y;
-- This also works:
set hive.vectorized.execution.enabled=true;
select * from test_parquet_na;
-- But this crashes:
set hive.vectorized.execution.enabled=true;
select * from test_parquet_na order by y;
-- => ERROR: same as OP,
-- Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
-- at
org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainIntegerDictionary.decodeToInt(PlainValuesDictionary.java:251)
-- at
org.apache.hadoop.hive.ql.io.parquet.vector.ParquetDataColumnReaderFactory$DefaultParquetDataColumnReader.readInteger(ParquetDataColumnReaderFactory.java:182)
-- ...{noformat}
Note: I ran my tests on an HDP cluster with Apache Hive version 3.1.0.3.1.5.0-152.
I can't easily test on a more recent version, sorry.
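As an aside, the suspected mechanism described in the issue below (a stale dictionary-id vector from a previous row group decoded against a new, empty dictionary) can be sketched in plain Python. The class and method names below mirror the ones in the stack trace, but the logic is an illustrative assumption about the failure mode, not Hive's actual code:

```python
# Illustrative sketch only: a reader that reuses the dictionary-id vector
# across row groups, then decodes stale ids against a fresh dictionary.

class PlainIntegerDictionary:
    """Stand-in for Parquet's PlainValuesDictionary (hypothetical)."""
    def __init__(self, values):
        self.values = values

    def decode_to_int(self, dict_id):
        # Raises IndexError when the id points past the dictionary,
        # mirroring ArrayIndexOutOfBoundsException: 0 in the report
        return self.values[dict_id]

# Row group 1: dictionary-encoded column; ids index into the dictionary
rg1_dict = PlainIntegerDictionary([10, 20])
rg1_ids = [0, 1, 0]
decoded = [rg1_dict.decode_to_int(i) for i in rg1_ids]

# Row group 2: all-NULL column -> empty dictionary, but a buggy reader
# reuses the id vector from row group 1 instead of resetting it
rg2_dict = PlainIntegerDictionary([])
err = None
try:
    [rg2_dict.decode_to_int(i) for i in rg1_ids]
except IndexError as e:
    err = e
print("decoded:", decoded, "| reuse error:", type(err).__name__)
```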
> ArrayIndexOutOfBoundsException when vectorized reader is used for reading a
> parquet file
> ----------------------------------------------------------------------------------------
>
> Key: HIVE-22670
> URL: https://issues.apache.org/jira/browse/HIVE-22670
> Project: Hive
> Issue Type: Bug
> Components: Parquet, Vectorization
> Affects Versions: 2.3.6, 3.1.2
> Reporter: Ganesha Shreedhara
> Assignee: Ganesha Shreedhara
> Priority: Major
> Attachments: HIVE-22670.1.patch, HIVE-22670.2.patch
>
>
> ArrayIndexOutOfBoundsException is thrown while decoding the dictionaryIds
> of a row group in a Parquet file when vectorization is enabled.
> *Exception stack trace:*
> {code:java}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> at
> org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.decodeToBinary(PlainValuesDictionary.java:122)
> at
> org.apache.hadoop.hive.ql.io.parquet.vector.ParquetDataColumnReaderFactory$DefaultParquetDataColumnReader.readString(ParquetDataColumnReaderFactory.java:95)
> at
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedPrimitiveColumnReader.decodeDictionaryIds(VectorizedPrimitiveColumnReader.java:467)
> at
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedPrimitiveColumnReader.readBatch(VectorizedPrimitiveColumnReader.java:68)
> at
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:410)
> at
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
> at
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
> at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> ... 24 more{code}
>
> This issue seems to be caused by re-using the same dictionary column vector
> while reading consecutive row groups. It looks like a corner-case bug that
> occurs for a certain distribution of dictionary/plain-encoded data when the
> underlying bit-packed dictionary data is read into a column-vector-based
> data structure.
> A similar issue was reported in Spark (Ref:
> https://issues.apache.org/jira/browse/SPARK-16334)
--
This message was sent by Atlassian Jira
(v8.20.7#820007)