[jira] [Closed] (CARBONDATA-3796) Load to table with inverted index configuration fails

2020-05-12 Thread Ajantha Bhat (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajantha Bhat closed CARBONDATA-3796.

Resolution: Duplicate

> Load to table with inverted index configuration fails
> -
>
> Key: CARBONDATA-3796
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3796
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 2.0.0
> Environment: Spark 2.3.2, Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Major
>
> Load to table with inverted index configuration fails
> CREATE TABLE uniqdata_inverted (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,36),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED as carbondata 
> TBLPROPERTIES('inverted_index'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,INTEGER_COLUMN1','sort_columns'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,INTEGER_COLUMN1');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_inverted OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> *Error: java.lang.Exception: DataLoad failure: Error while initializing data 
> handler : Failed for table: uniqdata_inverted in finishing data handler 
> (state=,code=0)*
>  
> *Exception -*
>  
> times, most recent failure: Lost task 0.3 in stage 43.0 (TID 94, vm3, 
> executor 1): 
> org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
>  Error while initializing data handler : Failed for table: uniqdata_inverted 
> in finishing data handler
>  at 
> org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.execute(CarbonRowDataWriterProcessorStepImpl.java:162)
>  at 
> org.apache.carbondata.processing.loading.DataLoadExecutor.execute(DataLoadExecutor.java:51)
>  at 
> org.apache.carbondata.spark.rdd.NewCarbonDataLoadRDD$$anon$1.(NewCarbonDataLoadRDD.scala:160)
>  at 
> org.apache.carbondata.spark.rdd.NewCarbonDataLoadRDD.internalCompute(NewCarbonDataLoadRDD.scala:128)
>  at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
>  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>  at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>  at org.apache.spark.scheduler.Task.run(Task.scala:109)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: 
> org.apache.carbondata.core.datastore.exception.CarbonDataWriterException: 
> Failed for table: uniqdata_inverted in finishing data handler
>  at 
> org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.finish(CarbonRowDataWriterProcessorStepImpl.java:243)
>  at 
> org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.doExecute(CarbonRowDataWriterProcessorStepImpl.java:221)
>  at 
> org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.execute(CarbonRowDataWriterProcessorStepImpl.java:146)
>  ... 12 more
> Caused by: 
> org.apache.carbondata.core.datastore.exception.CarbonDataWriterException: 
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:475)
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.finish(CarbonFactDataHandlerColumnar.java:435)
>  at 
> org.apache.carbondata.processing.loading.steps.CarbonRowDataWriterProcessorStepImpl.finish(CarbonRowDataWriterProcessorStepImpl.java:238)
>  ... 14 more
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
>  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:472)
>  ... 16 more
> Caused by: 
> org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar

[jira] [Commented] (CARBONDATA-3796) Load to table with inverted index configuration fails

2020-05-12 Thread Ajantha Bhat (Jira)


[ 
https://issues.apache.org/jira/browse/CARBONDATA-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17106010#comment-17106010
 ] 

Ajantha Bhat commented on CARBONDATA-3796:
--

Same as *CARBONDATA-3799*.


[jira] [Resolved] (CARBONDATA-3811) In Flat folder enabled table, it is returning no records while querying.

2020-05-12 Thread Kunal Kapoor (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Kapoor resolved CARBONDATA-3811.
--
Fix Version/s: 2.0.0
   Resolution: Fixed

> In Flat folder enabled table, it is returning no records while querying.
> 
>
> Key: CARBONDATA-3811
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3811
> Project: CarbonData
>  Issue Type: Bug
> Environment: opensource ANT cluster
>Reporter: Prasanna Ravichandran
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: Flat_folder_returning_zero.png
>
>
> Flat folder table is returning no records for select queries.
>  
> Test queries:
> drop table if exists uniqdata1;
> CREATE TABLE uniqdata1 (cust_id int,cust_name String,active_emui_version 
> string, dob timestamp, doj timestamp, bigint_column1 bigint,bigint_column2 
> bigint,decimal_column1 decimal(30,10), decimal_column2 
> decimal(36,36),double_column1 double, double_column2 double,integer_column1 
> int) stored as carbondata TBLPROPERTIES('flat_folder'='true');
> load data inpath 'hdfs://hacluster/user/prasanna/2000_UniqData.csv' into 
> table uniqdata1 
> options('fileheader'='cust_id,cust_name,active_emui_version,dob,doj,bigint_column1,bigint_column2,decimal_column1,decimal_column2,double_column1,double_column2,integer_column1','bad_records_action'='force');
> select count(*) from uniqdata1; -- returns 0
> select * from uniqdata1 limit 10; -- returns 0 rows



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (CARBONDATA-3805) Drop index on bloom and lucene index fails

2020-05-12 Thread Akash R Nilugal (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal resolved CARBONDATA-3805.
-
Fix Version/s: 2.0.0
   Resolution: Fixed

> Drop index on bloom and lucene index fails
> --
>
> Key: CARBONDATA-3805
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3805
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 2.0.0
> Environment: Spark 2.3.2, Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Major
> Fix For: 2.0.0
>
>
> Drop index on bloom and lucene index fails
> 0: jdbc:hive2://10.20.255.35:23040/default> create table brinjal_bloom (imei 
> string,AMSize string,channelsId string,ActiveCountry string, Activecity 
> string,gamePointId double,deviceInformationId double,productionDate 
> Timestamp,deliveryDate timestamp,deliverycharge double) STORED as carbondata 
> TBLPROPERTIES('table_blocksize'='1');
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (0.261 seconds)
>  0: jdbc:hive2://10.20.255.35:23040/default> LOAD DATA INPATH 
> 'hdfs://hacluster/chetan/vardhandaterestruct.csv' INTO TABLE brinjal_bloom 
> OPTIONS('DELIMITER'=',', 'QUOTECHAR'= 
> '"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'= 
> 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge');
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (2.196 seconds)
>  0: jdbc:hive2://10.20.255.35:23040/default> CREATE INDEX dm_brinjal ON TABLE 
> brinjal_bloom(AMSize) as 'bloomfilter' PROPERTIES ('BLOOM_SIZE'='64', 
> 'BLOOM_FPP'='0.1');
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (1.039 seconds)
>  0: jdbc:hive2://10.20.255.35:23040/default> drop index dm_brinjal on TABLE 
> brinjal_bloom;
>  *Error: org.apache.carbondata.core.exception.CarbonFileException: Error 
> while setting modified time: (state=,code=0)*
> 0: jdbc:hive2://10.20.255.171:23040/default> CREATE TABLE 
> uniqdata_lucene(CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB 
> timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED as carbondata;
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (0.632 seconds)
> 0: jdbc:hive2://10.20.255.171:23040/default> LOAD DATA INPATH 
> 'hdfs://hacluster/chetan/2000_UniqData.csv' into table uniqdata_lucene 
> OPTIONS('DELIMITER'=',', 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (3.894 seconds)
> 0: jdbc:hive2://10.20.255.171:23040/default> CREATE INDEX dm4 ON TABLE 
> uniqdata_lucene (CUST_NAME) AS 'lucene';
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (2.518 seconds)
> 0: jdbc:hive2://10.20.255.171:23040/default> drop index dm4 on table 
> uniqdata_lucene;
> *Error: org.apache.carbondata.core.exception.CarbonFileException: Error while 
> setting modified time: (state=,code=0)*
>  
> *Exception -*
> 2020-05-07 20:10:13,865 | ERROR | [HiveServer2-Background-Pool: Thread-358] | 
> Error executing query, currentState RUNNING,  | 
> org.apache.spark.internal.Logging$class.logError(Logging.scala:91)2020-05-07 
> 20:10:13,865 | ERROR | [HiveServer2-Background-Pool: Thread-358] | Error 
> executing query, currentState RUNNING,  | 
> org.apache.spark.internal.Logging$class.logError(Logging.scala:91)org.apache.carbondata.core.exception.CarbonFileException:
>  Error while setting modified time:  at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.setLastModifiedTime(AbstractDFSCarbonFile.java:192)
>  at 
> org.apache.spark.sql.secondaryindex.util.FileInternalUtil$.touchStoreTimeStamp(FileInternalUtil.scala:53)
>  at 
> org.apache.spark.sql.hive.CarbonHiveIndexMetadataUtil$.removeIndexInfoFromParentTable(CarbonHiveIndexMetadataUtil.scala:111)
>  at 
> org.apache.spark.sql.execution.command.index.DropIndexCommand.removeIndexInfoFromParentTable(DropIndexCommand.scala:261)
>  at 
> org.apache.spark.sql.execution.command.index.DropIndexCommand.dropIndex(DropIndexCommand.scala:179)
>  at 
> org.apache.spark.sql.execution.command.index.DropIndexCommand.run(DropIndexCommand.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCo

[jira] [Resolved] (CARBONDATA-3809) Refresh index command fails for secondary index as per syntax mentioned in https://github.com/apache/carbondata/blob/master/docs/index/secondary-index-guide.md

2020-05-12 Thread Akash R Nilugal (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal resolved CARBONDATA-3809.
-
Fix Version/s: 2.0.0
   Resolution: Fixed

> Refresh index command fails for secondary index as per syntax mentioned in 
> https://github.com/apache/carbondata/blob/master/docs/index/secondary-index-guide.md
> ---
>
> Key: CARBONDATA-3809
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3809
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 2.0.0
> Environment: Spark 2.3.2, Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Major
> Fix For: 2.0.0
>
>
> Refresh index command fails for secondary index as per syntax mentioned in 
> [https://github.com/apache/carbondata/blob/master/docs/index/secondary-index-guide.md]
>  
> 0: jdbc:hive2://10.20.255.171:23040/default> create table brinjal (imei 
> string,AMSize string,channelsId string,ActiveCountry string, Activecity 
> string,gamePointId double,deviceInformationId double,productionDate 
> Timestamp,deliveryDate timestamp,deliverycharge double) STORED as carbondata 
> TBLPROPERTIES('table_blocksize'='1');
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (0.218 seconds)
> 0: jdbc:hive2://10.20.255.171:23040/default> LOAD DATA INPATH 
> 'hdfs://hacluster/chetan/vardhandaterestruct.csv' INTO TABLE brinjal 
> OPTIONS('DELIMITER'=',', 'QUOTECHAR'= 
> '"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'= 
> 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge');
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (2.31 seconds)
> 0: jdbc:hive2://10.20.255.171:23040/default> CREATE INDEX indextable2 ON 
> TABLE brinjal (AMSize) AS 'carbondata';
> +--------+
> | Result |
> +--------+
> +--------+
> No rows selected (2.379 seconds)
> 0: jdbc:hive2://10.20.255.171:23040/default> refresh index indextable2;
> Error: org.apache.spark.sql.AnalysisException: == Parser1: 
> org.apache.spark.sql.parser.CarbonExtensionSpark2SqlParser ==
> [1.26] failure: end of input
> refresh index indextable2
>  ^;
> == Parser2: org.apache.spark.sql.execution.SparkSqlParser ==
> REFRESH statements cannot contain ' ', '\n', '\r', '\t' inside unquoted 
> resource paths(line 1, pos 0)
> == SQL ==
> refresh index indextable2
> ^^^; (state=,code=0)
> 0: jdbc:hive2://10.20.255.171:23040/default> REFRESH INDEX indextable2 WHERE 
> SEGMENT.ID IN(0);
> Error: org.apache.spark.sql.AnalysisException: == Parser1: 
> org.apache.spark.sql.parser.CarbonExtensionSpark2SqlParser ==
> [1.27] failure: identifier matching regex (?i)ON expected
> REFRESH INDEX indextable2 WHERE SEGMENT.ID IN(0)
>  ^;
> == Parser2: org.apache.spark.sql.execution.SparkSqlParser ==
> REFRESH statements cannot contain ' ', '\n', '\r', '\t' inside unquoted 
> resource paths(line 1, pos 0)
> == SQL ==
> REFRESH INDEX indextable2 WHERE SEGMENT.ID IN(0)
> ^^^; (state=,code=0)
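As a hedged sketch (not taken from this report): the Parser1 hint above ("identifier matching regex (?i)ON expected") and the linked secondary-index guide suggest the command expects an ON TABLE clause naming the parent table, so the presumably accepted forms would be something like:

```sql
-- Presumed accepted syntax, assuming the parent table is brinjal as created above:
REFRESH INDEX indextable2 ON TABLE brinjal;
REFRESH INDEX indextable2 ON TABLE brinjal WHERE SEGMENT.ID IN (0);
```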





[jira] [Resolved] (CARBONDATA-3801) Query on partition table with SI having multiple partition columns gives empty results

2020-05-12 Thread Akash R Nilugal (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal resolved CARBONDATA-3801.
-
Fix Version/s: 2.0.0
   Resolution: Fixed

> Query on partition table with SI having multiple partition columns gives empty 
> results
> -
>
> Key: CARBONDATA-3801
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3801
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Indhumathi Muthumurugesh
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>






[jira] [Created] (CARBONDATA-3822) Load Taken time is shown as PT-1.2S and show segment is missing Format

2020-05-12 Thread Kunal Kapoor (Jira)
Kunal Kapoor created CARBONDATA-3822:


 Summary: Load Taken time is shown as PT-1.2S and show segment is 
missing Format  
 Key: CARBONDATA-3822
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3822
 Project: CarbonData
  Issue Type: Task
Reporter: Kunal Kapoor
Assignee: Kunal Kapoor








[jira] [Resolved] (CARBONDATA-3799) inverted index cannot work with adaptive encoding

2020-05-12 Thread Kunal Kapoor (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Kapoor resolved CARBONDATA-3799.
--
Resolution: Fixed

> inverted index cannot work with adaptive encoding
> -
>
> Key: CARBONDATA-3799
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3799
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Ajantha Bhat
>Assignee: Ajantha Bhat
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After PR #3638, inverted index cannot work with adaptive encoding.
> Two issues are present:
> a) For Byte (not DirectByteBuffer), the encoded column page has a wrong result, as 
> position() is used instead of limit().
> b) For short (DirectByteBuffer), result.array() fails, as it is 
> unsupported for a direct byte buffer.
>  
> Solution:
> 1) For problem a), use limit().
> 2) For problem b), write byte by byte. 
>  
>  
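The position()-vs-limit() mix-up described above can be illustrated with a small sketch. This is a hypothetical buffer model mirroring java.nio.ByteBuffer semantics, not CarbonData's actual classes: after a flip, position is 0 and limit marks the end of the written data, so sizing the encoded page by position yields an empty page.

```python
# Hypothetical model of a java.nio.ByteBuffer after flip():
# position = 0 (read cursor), limit = number of bytes written.
class FlippedBuffer:
    def __init__(self, data: bytes):
        self.data = data
        self.position = 0        # reset to 0 by flip()
        self.limit = len(data)   # end of the valid (written) data

buf = FlippedBuffer(b"\x01\x02\x03\x04")

# The bug: sizing the encoded column page by position() yields an empty page.
wrong = buf.data[:buf.position]
# The fix: sizing it by limit() yields the full encoded page.
right = buf.data[:buf.limit]

assert wrong == b""
assert right == b"\x01\x02\x03\x04"
```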





[jira] [Created] (CARBONDATA-3821) Cache database metadata for mv

2020-05-12 Thread Zhi Liu (Jira)
Zhi Liu created CARBONDATA-3821:
---

 Summary: Cache database metadata for mv
 Key: CARBONDATA-3821
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3821
 Project: CarbonData
  Issue Type: Improvement
Reporter: Zhi Liu
Assignee: Zhi Liu








[jira] [Created] (CARBONDATA-3820) Support GlobalSort in the CDC

2020-05-12 Thread Xingjun Hao (Jira)
Xingjun Hao created CARBONDATA-3820:
---

 Summary: Support GlobalSort in the CDC
 Key: CARBONDATA-3820
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3820
 Project: CarbonData
  Issue Type: New Feature
Reporter: Xingjun Hao


If a GlobalSort table is used in the CDC flow, the following exception is
thrown:

Exception in thread "main" java.lang.RuntimeException: column: id specified
in sort columns does not exist in schema
        at
org.apache.carbondata.sdk.file.CarbonWriterBuilder.buildTableSchema(CarbonWriterBuilder.java:828)
        at
org.apache.carbondata.sdk.file.CarbonWriterBuilder.buildCarbonTable(CarbonWriterBuilder.java:794)
        at
org.apache.carbondata.sdk.file.CarbonWriterBuilder.buildLoadModel(CarbonWriterBuilder.java:720)
        at
org.apache.spark.sql.carbondata.execution.datasources.CarbonSparkDataSourceUtil$.prepareLoadModel(CarbonSparkDataSourceUtil.scala:281)
        at
org.apache.spark.sql.carbondata.execution.datasources.SparkCarbonFileFormat.prepareWrite(SparkCarbonFileFormat.scala:141)
        at
org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand.processIUD(CarbonMergeDataSetCommand.scala:269)
        at
org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand.processData(CarbonMergeDataSetCommand.scala:152)





[jira] [Resolved] (CARBONDATA-3814) Remove unused MV events and refactor existing MV events class

2020-05-12 Thread Kunal Kapoor (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Kapoor resolved CARBONDATA-3814.
--
Fix Version/s: 2.0.0
   Resolution: Fixed

> Remove unused MV events and refactor existing MV events class
> -
>
> Key: CARBONDATA-3814
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3814
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 2.0.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Remove unused MV events and refactor existing MV events class





[jira] [Created] (CARBONDATA-3819) Fileformat column details is not present in the show segments DDL for heterogeneous segments table.

2020-05-12 Thread Prasanna Ravichandran (Jira)
Prasanna Ravichandran created CARBONDATA-3819:
-

 Summary: Fileformat column details is not present in the show 
segments DDL for heterogeneous segments table.
 Key: CARBONDATA-3819
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3819
 Project: CarbonData
  Issue Type: Bug
 Environment: Opensource ANT cluster
Reporter: Prasanna Ravichandran
 Attachments: fileformat_notworking_actualresult.PNG, 
fileformat_working_expected.PNG

The FileFormat column details are not present in the show segments DDL output 
for a heterogeneous segments table.

Test steps: 
 # Create a heterogeneous table with added Parquet and CARBON segments.
 # Run show segments. 

Expected result:

The "FileFormat" column details should appear in the show segments DDL output.

Actual result: 

The "FileFormat" column details are not shown in the show segments DDL output.

See the attached screenshots for more details.

 

 


