[ https://issues.apache.org/jira/browse/HIVE-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890590#comment-13890590 ]
Hive QA commented on HIVE-6320:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12626832/HIVE-6320.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4997 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.common.type.TestDecimal128.testHighPrecisionDecimal128Multiply
{noformat}

Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1174/testReport
Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1174/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12626832

> Row-based ORC reader with PPD turned on dies on BufferUnderFlowException
> -------------------------------------------------------------------------
>
>                 Key: HIVE-6320
>                 URL: https://issues.apache.org/jira/browse/HIVE-6320
>             Project: Hive
>          Issue Type: Bug
>          Components: Serializers/Deserializers
>    Affects Versions: 0.13.0
>            Reporter: Gopal V
>            Assignee: Prasanth J
>              Labels: orcfile
>         Attachments: HIVE-6320.1.patch, HIVE-6320.2.patch, HIVE-6320.2.patch, HIVE-6320.3.patch
>
>
> The ORC data reader crashes with a BufferUnderflowException while reading data row-by-row with predicate push-down enabled on current trunk.
> {code}
> Caused by: java.nio.BufferUnderflowException
>   at java.nio.Buffer.nextGetIndex(Buffer.java:472)
>   at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:117)
>   at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:207)
>   at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
>   at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:240)
>   at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:53)
>   at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:288)
>   at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$IntTreeReader.next(RecordReaderImpl.java:510)
>   at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1581)
>   at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2707)
>   at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:125)
>   at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:101)
> {code}
> The query run is:
> {code}
> set hive.vectorized.execution.enabled=false;
> set hive.optimize.index.filter=true;
> insert overwrite directory '/tmp/foo' select * from lineitem where l_orderkey is not null;
> {code}


--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
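
For context on the quoted trace: the exception at the bottom is plain java.nio behaviour rather than anything ORC-specific. ByteBuffer.get() throws BufferUnderflowException the moment the buffer's position reaches its limit, which is presumably what InStream$CompressedStream.read() runs into when predicate push-down leaves it asking for bytes the current compressed chunk no longer holds. Below is a minimal, JDK-only sketch of that failure mode; the class name is illustrative and no Hive classes are involved.

{code}
import java.nio.ByteBuffer;

// Standalone illustration of the exception at the bottom of the stack trace:
// get() underflows as soon as position == limit.
public class BufferUnderflowDemo {
  public static void main(String[] args) {
    ByteBuffer chunk = ByteBuffer.wrap(new byte[] {0x01, 0x02});

    chunk.get(); // ok: position 0 -> 1
    chunk.get(); // ok: position 1 -> 2, buffer now exhausted (position == limit)

    // A guarded reader checks remaining() before every get(); skipping that
    // check reproduces the java.nio.BufferUnderflowException seen above.
    if (chunk.remaining() == 0) {
      System.out.println("chunk exhausted; the next get() will underflow");
    }
    chunk.get(); // throws java.nio.BufferUnderflowException
  }
}
{code}

Running this prints the message and then fails with java.nio.BufferUnderflowException, matching the top frames of the reported trace (Buffer.nextGetIndex -> HeapByteBuffer.get); exact line numbers depend on the JDK version.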