[
https://issues.apache.org/jira/browse/HIVE-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014686#comment-15014686
]
Prasanth Jayachandran commented on HIVE-12472:
----------------------------------------------
[~ashutoshc] Can you take a look at this one? It just adds a test case for an
already fixed bug.
> Add test case for HIVE-10592
> ----------------------------
>
> Key: HIVE-12472
> URL: https://issues.apache.org/jira/browse/HIVE-12472
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.3.0, 2.0.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Attachments: HIVE-12472.patch
>
>
> HIVE-10592 fixed the following NPE, which is hit when every value in the
> table's timestamp and date columns is null (see the test sketch after the stack trace below):
> {code:title=query}
> set hive.optimize.index.filter=true;
> select count(*) from orctable where timestamp_col is null;
> select count(*) from orctable where date_col is null;
> {code}
> {code:title=exception}
> Caused by: java.lang.NullPointerException
> at org.apache.hadoop.hive.ql.io.orc.ColumnStatisticsImpl$TimestampStatisticsImpl.getMinimum(ColumnStatisticsImpl.java:845)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.getMin(RecordReaderImpl.java:308)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.evaluatePredicateProto(RecordReaderImpl.java:332)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$SargApplier.pickRowGroups(RecordReaderImpl.java:710)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.pickRowGroups(RecordReaderImpl.java:751)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:777)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:986)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1019)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:205)
> at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:598)
> at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
> at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$OriginalReaderPair.<init>(OrcRawRecordMerger.java:226)
> at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:437)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1235)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1117)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:249)
> ... 26 more
> ]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:1, Vertex vertex_1446768202865_0008_5_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> {code}
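A minimal sketch of the kind of setup such a test case might use. The table name {{orc_ppd_null}}, its columns, and the use of the standard q-file table {{src}} as a row source are assumptions for illustration; the actual table and data in HIVE-12472.patch may differ.
{code:title=test sketch (hypothetical)}
set hive.optimize.index.filter=true;

-- ORC table whose timestamp and date columns contain only NULLs, so the
-- column statistics carry no minimum/maximum values (hypothetical names)
create table orc_ppd_null (ts timestamp, dt date) stored as orc;
insert into table orc_ppd_null
select cast(null as timestamp), cast(null as date) from src limit 10;

-- Before HIVE-10592 these predicate-pushdown queries hit the NPE in
-- TimestampStatisticsImpl.getMinimum(); with the fix they should return 10
select count(*) from orc_ppd_null where ts is null;
select count(*) from orc_ppd_null where dt is null;
{code}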
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)