[ https://issues.apache.org/jira/browse/HIVE-22670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17071919#comment-17071919 ]

Ferdinand Xu commented on HIVE-22670:
-------------------------------------

Agree with [~pvary]. We should keep the inner loop as simple as possible. Having 
a lot of branches inside the loop may lead to performance drops. Did you get a 
chance to measure its performance impact? [~ganeshas]
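
For illustration, a minimal sketch (hypothetical class and method names, not the actual patch code) of the difference between branching per value and hoisting the branch out of the hot loop:

{code:java}
// Hypothetical sketch only, not HIVE-22670 patch code.
public final class DecodeLoopSketch {

  // Variant 1: the encoding check is evaluated for every value,
  // which keeps an extra branch inside the hot loop.
  static void decodeBranchInLoop(int[] ids, byte[][] dict, byte[][] out,
      boolean isDictionaryEncoded) {
    for (int i = 0; i < ids.length; i++) {
      if (isDictionaryEncoded) {
        out[i] = dict[ids[i]];
      } else {
        out[i] = plainValue(ids[i]);
      }
    }
  }

  // Variant 2: the check is hoisted outside the loop,
  // so each inner loop body stays simple.
  static void decodeBranchHoisted(int[] ids, byte[][] dict, byte[][] out,
      boolean isDictionaryEncoded) {
    if (isDictionaryEncoded) {
      for (int i = 0; i < ids.length; i++) {
        out[i] = dict[ids[i]];
      }
    } else {
      for (int i = 0; i < ids.length; i++) {
        out[i] = plainValue(ids[i]);
      }
    }
  }

  // Stand-in for a plain-encoded decode path.
  private static byte[] plainValue(int v) {
    return new byte[] { (byte) v };
  }
}
{code}

A micro-benchmark over the two variants (e.g. with JMH) on a representative Parquet file would answer the performance question.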

> ArrayIndexOutOfBoundsException when vectorized reader is used for reading a 
> parquet file
> ----------------------------------------------------------------------------------------
>
>                 Key: HIVE-22670
>                 URL: https://issues.apache.org/jira/browse/HIVE-22670
>             Project: Hive
>          Issue Type: Bug
>          Components: Parquet, Vectorization
>    Affects Versions: 3.1.2, 2.3.6
>            Reporter: Ganesha Shreedhara
>            Assignee: Ganesha Shreedhara
>            Priority: Major
>         Attachments: HIVE-22670.1.patch, HIVE-22670.2.patch
>
>
> ArrayIndexOutOfBoundsException is thrown while decoding dictionaryIds of a row 
> group in a Parquet file with vectorization enabled. 
> *Exception stack trace:*
> {code:java}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
>  at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.decodeToBinary(PlainValuesDictionary.java:122)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.ParquetDataColumnReaderFactory$DefaultParquetDataColumnReader.readString(ParquetDataColumnReaderFactory.java:95)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedPrimitiveColumnReader.decodeDictionaryIds(VectorizedPrimitiveColumnReader.java:467)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedPrimitiveColumnReader.readBatch(VectorizedPrimitiveColumnReader.java:68)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:410)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
>  ... 24 more{code}
>  
> This issue seems to be caused by re-using the same dictionary column vector 
> while reading consecutive row groups. It looks like a corner-case bug that 
> occurs for a certain distribution of dictionary/plain encoded data while we 
> read/populate the underlying bit-packed dictionary data into a column-vector 
> based data structure. 
> A similar issue was reported in Spark (Ref: 
> https://issues.apache.org/jira/browse/SPARK-16334)
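> As a rough illustration of the suspected failure mode (hypothetical skeleton, 
> not the actual VectorizedPrimitiveColumnReader code): if the decoded dictionary 
> of a previous row group is kept while only the encoding flag is refreshed, the 
> dictionary ids of the new row group can index past the stale dictionary.
> {code:java}
> // Hypothetical skeleton only; names are made up for illustration.
> class DictionaryReuseSketch {
>   private byte[][] dictionary;        // decoded dictionary of the previous row group
>   private boolean pageIsDictEncoded;  // encoding flag of the current page
>
>   byte[] read(int encodedValue) {
>     if (pageIsDictEncoded) {
>       // If 'dictionary' was not rebuilt for the current row group, this lookup
>       // can throw ArrayIndexOutOfBoundsException, as in the stack trace above.
>       return dictionary[encodedValue];
>     }
>     return plainDecode(encodedValue);
>   }
>
>   private byte[] plainDecode(int v) { return new byte[] { (byte) v }; }
> }
> {code}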



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
