[ https://issues.apache.org/jira/browse/HIVE-21492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17074423#comment-17074423 ]
Hive QA commented on HIVE-21492:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12998682/HIVE-21492.4.patch
{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18158 tests executed
*Failed tests:*
{noformat}
TestJdbcWithMiniLlapArrow - did not produce a TEST-*.xml file (likely timed out) (batchId=292)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join5] (batchId=203)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21418/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21418/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21418/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12998682 - PreCommit-HIVE-Build
> VectorizedParquetRecordReader can't read parquet file generated using thrift/custom tool
> -----------------------------------------------------------------------------------------
>
> Key: HIVE-21492
> URL: https://issues.apache.org/jira/browse/HIVE-21492
> Project: Hive
> Issue Type: Bug
> Reporter: Ganesha Shreedhara
> Assignee: Ganesha Shreedhara
> Priority: Major
> Attachments: HIVE-21492.2.patch, HIVE-21492.3.patch,
> HIVE-21492.4.patch, HIVE-21492.patch
>
>
> Take the example of a Parquet table having an array of integers, as below:
> {code:java}
> CREATE EXTERNAL TABLE {table_name} (`list_of_ints` array<int>)
> STORED AS PARQUET
> LOCATION '{location}';
> {code}
> A Parquet file generated using Hive will have the following schema for this type:
> {code:java}
> group list_of_ints (LIST) {
>   repeated group bag {
>     optional int32 array;
>   }
> }
> {code}
> A Parquet file generated using Thrift or any custom tool (built on org.apache.parquet.io.api.RecordConsumer) may have the following schema for this type:
> {code:java}
> required group list_of_ints (LIST) {
>   repeated int32 list_of_ints_tuple
> }
> {code}
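> The structural difference can be inspected directly with the Parquet schema API. The sketch below is illustrative only (the class name ListLayoutCheck and the message names hive_record/custom_record are made up): it parses both layouts with org.apache.parquet.schema.MessageTypeParser and shows that in the single-level layout the repeated child of the LIST group is a primitive, which is exactly the shape the asGroupType() call in the stack trace below cannot handle.
> {code:java}
> import org.apache.parquet.schema.MessageType;
> import org.apache.parquet.schema.MessageTypeParser;
> import org.apache.parquet.schema.Type;
>
> public class ListLayoutCheck {
>   public static void main(String[] args) {
>     // Hive-style three-level layout: element wrapped in a repeated "bag" group
>     MessageType hiveStyle = MessageTypeParser.parseMessageType(
>         "message hive_record { "
>       + "  optional group list_of_ints (LIST) { "
>       + "    repeated group bag { optional int32 array; } "
>       + "  } "
>       + "}");
>
>     // Single-level layout written by Thrift/custom RecordConsumer-based tools
>     MessageType customStyle = MessageTypeParser.parseMessageType(
>         "message custom_record { "
>       + "  required group list_of_ints (LIST) { "
>       + "    repeated int32 list_of_ints_tuple; "
>       + "  } "
>       + "}");
>
>     Type hiveChild = hiveStyle.getType("list_of_ints").asGroupType().getType(0);
>     Type customChild = customStyle.getType("list_of_ints").asGroupType().getType(0);
>
>     System.out.println(hiveChild.isPrimitive());   // false: "bag" is a group
>     System.out.println(customChild.isPrimitive()); // true: calling asGroupType() on
>                                                    // this child throws ClassCastException
>   }
> }
> {code}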
> VectorizedParquetRecordReader handles only Parquet files generated using Hive. Because of the changes done as part of HIVE-18553, it throws the following exception when a Parquet file generated using Thrift is read:
> {code:java}
> Caused by: java.lang.ClassCastException: repeated int32 list_of_ints_tuple is not a group
>  at org.apache.parquet.schema.Type.asGroupType(Type.java:207)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.getElementType(VectorizedParquetRecordReader.java:479)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:532)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> {code}
>
> I have made a small change to handle the case where the child type of the group type can be a PrimitiveType.
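> A minimal sketch of the idea (illustrative only, not the actual patch code; the helper name getElementType merely mirrors the method in the stack trace): check whether the repeated child of the LIST group is a primitive before calling asGroupType() on it, and if so treat the repeated field itself as the element type.
> {code:java}
> import org.apache.parquet.schema.GroupType;
> import org.apache.parquet.schema.Type;
>
> public final class ListElementTypeSketch {
>
>   // Illustrative sketch, not the actual HIVE-21492 patch: resolve the element
>   // type of a LIST-annotated group, tolerating both the three-level Hive layout
>   // and the single-level Thrift/custom layout.
>   static Type getElementType(GroupType listType) {
>     Type repeated = listType.getType(0);
>     if (repeated.isPrimitive()) {
>       // Single-level layout: the repeated primitive field itself is the element.
>       return repeated;
>     }
>     // Three-level layout: the element is nested inside the repeated "bag" group.
>     return repeated.asGroupType().getType(0);
>   }
> }
> {code}
> This matches the backward-compatibility rules in the Parquet LIST specification: if the repeated field is not a group, its type is the element type.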
--
This message was sent by Atlassian Jira
(v8.3.4#803005)