[ https://issues.apache.org/jira/browse/CARBONDATA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299650#comment-16299650 ]
Sangeeta Gulia commented on CARBONDATA-1758:
--------------------------------------------
[~chetdb] This is the result of my query after executing the entire sequence of
queries you mentioned:
0: jdbc:hive2://hadoop-master:10000> Select CUST_ID from uniqdata_DI_int
where CUST_ID is null;
+----------+--+
| CUST_ID |
+----------+--+
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
+----------+--+
26 rows selected (0.408 seconds)
0: jdbc:hive2://hadoop-master:10000>
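For reference, a count-based cross-check of the same predicate (not part of the
original session; table and column names as above) would be:
Select count(*) from uniqdata_DI_int where CUST_ID is null;
If it matches the result set shown above, it should return 26.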
> Carbon1.3.0- No Inverted Index : Select column with is null for
> no_inverted_index column throws java.lang.ArrayIndexOutOfBoundsException
> ----------------------------------------------------------------------------------------------------------------------------------------
>
> Key: CARBONDATA-1758
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1758
> Project: CarbonData
> Issue Type: Bug
> Components: data-query
> Affects Versions: 1.3.0
> Environment: 3 node cluster
> Reporter: Chetan Bhat
> Labels: Functional
>
> Steps:
> In Beeline, the user executes the following queries in sequence.
> CREATE TABLE uniqdata_DI_int (CUST_ID int, CUST_NAME string,
> ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp,
> BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint, DECIMAL_COLUMN1 decimal(30,10),
> DECIMAL_COLUMN2 decimal(36,10), Double_COLUMN1 double, Double_COLUMN2 double,
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'
> TBLPROPERTIES('DICTIONARY_INCLUDE'='cust_id','NO_INVERTED_INDEX'='cust_id');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/3000_UniqData.csv' INTO TABLE
> uniqdata_DI_int OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"',
> 'BAD_RECORDS_ACTION'='FORCE',
> 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> Select count(CUST_ID) from uniqdata_DI_int;
> Select count(CUST_ID)*10 as multiple from uniqdata_DI_int;
> Select avg(CUST_ID) as average from uniqdata_DI_int;
> Select floor(CUST_ID) as average from uniqdata_DI_int;
> Select ceil(CUST_ID) as average from uniqdata_DI_int;
> Select ceiling(CUST_ID) as average from uniqdata_DI_int;
> Select CUST_ID*integer_column1 as multiple from uniqdata_DI_int;
> Select CUST_ID from uniqdata_DI_int where CUST_ID is null;
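> A quick way to confirm that the NO_INVERTED_INDEX property was actually
> applied to cust_id (a sanity check, not part of the original steps, assuming
> standard Spark SQL/Hive behavior) is to inspect the table details:
> DESCRIBE FORMATTED uniqdata_DI_int;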
> *Issue: Selecting a column with IS NULL on a NO_INVERTED_INDEX column throws
> java.lang.ArrayIndexOutOfBoundsException.*
> 0: jdbc:hive2://10.18.98.34:23040> Select CUST_ID from uniqdata_DI_int where
> CUST_ID is null;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure:
> Task 0 in stage 79.0 failed 4 times, most recent failure: Lost task 0.3 in
> stage 79.0 (TID 123, BLR1000014278, executor 18):
> org.apache.spark.util.TaskCompletionListenerException:
> java.util.concurrent.ExecutionException:
> java.lang.ArrayIndexOutOfBoundsException: 0
>         at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
>         at org.apache.spark.scheduler.Task.run(Task.scala:112)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace: (state=,code=0)
> Expected: Selecting a column with IS NULL on a NO_INVERTED_INDEX column
> should succeed and display the correct result set.
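> A useful control, sketched here as an assumption rather than part of the
> original report, is to repeat the sequence on an identical table created
> without the NO_INVERTED_INDEX property (the name uniqdata_DI_int_ctrl is
> hypothetical); if the IS NULL query then succeeds, the property is isolated
> as the trigger:
> CREATE TABLE uniqdata_DI_int_ctrl (CUST_ID int, CUST_NAME string,
> ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp,
> BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint, DECIMAL_COLUMN1 decimal(30,10),
> DECIMAL_COLUMN2 decimal(36,10), Double_COLUMN1 double, Double_COLUMN2 double,
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'
> TBLPROPERTIES('DICTIONARY_INCLUDE'='cust_id');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/3000_UniqData.csv' INTO TABLE
> uniqdata_DI_int_ctrl OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"',
> 'BAD_RECORDS_ACTION'='FORCE',
> 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> Select CUST_ID from uniqdata_DI_int_ctrl where CUST_ID is null;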