[jira] [Created] (CARBONDATA-3585) Range Compaction fails in case of KryoSerializer
MANISH NALLA created CARBONDATA-3585: Summary: Range Compaction fails in case of KryoSerializer Key: CARBONDATA-3585 URL: https://issues.apache.org/jira/browse/CARBONDATA-3585 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (CARBONDATA-3574) Delete segment by ID gives results from the deleted parquet segments before clean files
MANISH NALLA created CARBONDATA-3574: Summary: Delete segment by ID gives results from the deleted parquet segments before clean files Key: CARBONDATA-3574 URL: https://issues.apache.org/jira/browse/CARBONDATA-3574 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (CARBONDATA-3567) Added segment datasize and indexsize not correct and added mixed format segment compaction not correct
MANISH NALLA created CARBONDATA-3567: Summary: Added segment datasize and indexsize not correct and added mixed format segment compaction not correct Key: CARBONDATA-3567 URL: https://issues.apache.org/jira/browse/CARBONDATA-3567 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (CARBONDATA-3562) Fix for SDK filter queries not working when schema is given explicitly while Add Segment
MANISH NALLA created CARBONDATA-3562: Summary: Fix for SDK filter queries not working when schema is given explicitly while Add Segment Key: CARBONDATA-3562 URL: https://issues.apache.org/jira/browse/CARBONDATA-3562 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Queries will not return the correct result from an added segment when the schema is given explicitly in the SDK case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (CARBONDATA-3560) When format is given in uppercase, add segment does not work
MANISH NALLA created CARBONDATA-3560: Summary: When format is given in uppercase, add segment does not work Key: CARBONDATA-3560 URL: https://issues.apache.org/jira/browse/CARBONDATA-3560 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (CARBONDATA-3507) Create Table As Select Fails in Spark-2.3
MANISH NALLA created CARBONDATA-3507: Summary: Create Table As Select Fails in Spark-2.3 Key: CARBONDATA-3507 URL: https://issues.apache.org/jira/browse/CARBONDATA-3507 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA CTAS fails due to wrong file path -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (CARBONDATA-3502) Select query fails with UDF having Match expression inside IN expression
MANISH NALLA created CARBONDATA-3502: Summary: Select query fails with UDF having Match expression inside IN expression Key: CARBONDATA-3502 URL: https://issues.apache.org/jira/browse/CARBONDATA-3502 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Select query fails with UDF having Match expression inside IN expression and throws ArrayIndexOutOfBounds exception -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (CARBONDATA-3486) Serialization/deserialization issue with Datatype
MANISH NALLA created CARBONDATA-3486: Summary: Serialization/deserialization issue with Datatype Key: CARBONDATA-3486 URL: https://issues.apache.org/jira/browse/CARBONDATA-3486 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA When we use an old store, alter the table to add sort columns, and then query the old segment, a serialization/deserialization issue occurs for a filter column of measure type that has been changed into a sort column, as it is deserialized by ObjectSerialization. This fails the check and the query. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (CARBONDATA-3450) Select query with average function for substring of binary column throws incorrect exception/error
[ https://issues.apache.org/jira/browse/CARBONDATA-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879018#comment-16879018 ] MANISH NALLA commented on CARBONDATA-3450: -- The error is thrown from the Spark side, so it cannot be handled from Carbon > Select query with average function for substring of binary column throws > incorrect exception/error > -- > > Key: CARBONDATA-3450 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3450 > Project: CarbonData > Issue Type: Bug > Components: data-query >Affects Versions: 1.6.0 > Environment: Spark 2.1 >Reporter: Chetan Bhat >Priority: Minor > > Steps : > From Spark beeline user creates a table with binary type and loads data to > table. > CREATE TABLE uniqdata (CUST_ID int,CUST_NAME binary,ACTIVE_EMUI_VERSION > string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 > bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 > decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 > int) STORED BY 'org.apache.carbondata.format' > TBLPROPERTIES('table_blocksize'='2000'); > LOAD DATA inpath 'hdfs://hacluster/chetan/2000_UniqData.csv' into table > uniqdata OPTIONS('DELIMITER'=',' > ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1'); > Select query with average function for substring of binary column is executed.
> select > max(substr(CUST_NAME,1,2)),min(substr(CUST_NAME,1,2)),avg(substr(CUST_NAME,1,2)),count(substr(CUST_NAME,1,2)),sum(substr(CUST_NAME,1,2)),variance(substr(CUST_NAME,1,2)) > from uniqdata where CUST_ID IS NULL or DOB IS NOT NULL or BIGINT_COLUMN1 > =1233720368578 or DECIMAL_COLUMN1 = 12345678901.123458 or Double_COLUMN1 > = 1.12345674897976E10 or INTEGER_COLUMN1 IS NULL limit 10; > select > max(substring(CUST_NAME,1,2)),min(substring(CUST_NAME,1,2)),avg(substring(CUST_NAME,1,2)),count(substring(CUST_NAME,1,2)),sum(substring(CUST_NAME,1,2)),variance(substring(CUST_NAME,1,2)) > from uniqdata where CUST_ID IS NULL or DOB IS NOT NULL or BIGINT_COLUMN1 > =1233720368578 or DECIMAL_COLUMN1 = 12345678901.123458 or Double_COLUMN1 > = 1.12345674897976E10 or INTEGER_COLUMN1 IS NULL limit 10; > > 【Actual Output】:Select query with average function for substring of binary > column throws incorrect exception/error > 0: jdbc:hive2://10.18.98.120:22550/default> select > max(substr(CUST_NAME,1,2)),min(substr(CUST_NAME,1,2)),avg(substr(CUST_NAME,1,2)),count(substr(CUST_NAME,1,2)),sum(substr(CUST_NAME,1,2)),variance(substr(CUST_NAME,1,2)) > from uniqdata where CUST_ID IS NULL or DOB IS NOT NULL or BIGINT_COLUMN1 > =1233720368578 or DECIMAL_COLUMN1 = 12345678901.123458 or Double_COLUMN1 > = 1.12345674897976E10 or INTEGER_COLUMN1 IS NULL limit 10; > *Error: org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid > call to name on unresolved object, tree: > unresolvedalias(avg(substring(CUST_NAME#45, 1, 2)), None) (state=,code=0)* > > 【Expected Output】:Select query with average function for substring of binary > column should throw correct error message indicating the type binary cant be > supported. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3449) Initialization of listeners in case of concurrent scenarios is not synchronized
MANISH NALLA created CARBONDATA-3449: Summary: Initialization of listeners in case of concurrent scenarios is not synchronized Key: CARBONDATA-3449 URL: https://issues.apache.org/jira/browse/CARBONDATA-3449 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3445) In Aggregate query, CountStarPlan throws head of empty list error
MANISH NALLA created CARBONDATA-3445: Summary: In Aggregate query, CountStarPlan throws head of empty list error Key: CARBONDATA-3445 URL: https://issues.apache.org/jira/browse/CARBONDATA-3445 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3437) Map Implementation not correct
MANISH NALLA created CARBONDATA-3437: Summary: Map Implementation not correct Key: CARBONDATA-3437 URL: https://issues.apache.org/jira/browse/CARBONDATA-3437 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3437) Map Implementation not correct
[ https://issues.apache.org/jira/browse/CARBONDATA-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3437: - Description: Insert into a map should override the old value when a duplicate key arrives, which it was not doing. > Map Implementation not correct > -- > > Key: CARBONDATA-3437 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3437 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Priority: Minor > > Insert into a map should override the old value when a duplicate > key arrives, which it was not doing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
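The override semantics described in CARBONDATA-3437 match standard map behaviour; a minimal Python sketch (illustrative toy data only, not CarbonData code) of what an insert with duplicate keys is expected to do:

```python
# Expected map semantics on duplicate keys (illustrative, not CarbonData
# code): a later insert with an existing key overrides the stored value.
entries = [("k1", "old"), ("k2", "b"), ("k1", "new")]

m = {}
for key, value in entries:
    m[key] = value  # a later duplicate key overrides the earlier value

print(m)  # -> {'k1': 'new', 'k2': 'b'}
```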
[jira] [Created] (CARBONDATA-3432) Range Column compaction sending all the splits to all the executors one by one
MANISH NALLA created CARBONDATA-3432: Summary: Range Column compaction sending all the splits to all the executors one by one Key: CARBONDATA-3432 URL: https://issues.apache.org/jira/browse/CARBONDATA-3432 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3432) Range Column compaction sending all the splits to all the executors one by one
[ https://issues.apache.org/jira/browse/CARBONDATA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3432: - Description: Range Column compaction sends all the splits to all the executors one by one; instead, they can be broadcast to all executors at once. > Range Column compaction sending all the splits to all the executors one by one > -- > > Key: CARBONDATA-3432 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3432 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Priority: Minor > > Range Column compaction sends all the splits to all the executors one > by one; instead, they can be broadcast to all executors at once. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3419) Desc Formatted not showing Range Column
MANISH NALLA created CARBONDATA-3419: Summary: Desc Formatted not showing Range Column Key: CARBONDATA-3419 URL: https://issues.apache.org/jira/browse/CARBONDATA-3419 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3417) Load time degrade for Range column due to cores configured
MANISH NALLA created CARBONDATA-3417: Summary: Load time degrade for Range column due to cores configured Key: CARBONDATA-3417 URL: https://issues.apache.org/jira/browse/CARBONDATA-3417 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3396) Range Compaction Data mismatch
MANISH NALLA created CARBONDATA-3396: Summary: Range Compaction Data mismatch Key: CARBONDATA-3396 URL: https://issues.apache.org/jira/browse/CARBONDATA-3396 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3377) String Type Column with huge strings and null values fails Range Compaction
MANISH NALLA created CARBONDATA-3377: Summary: String Type Column with huge strings and null values fails Range Compaction Key: CARBONDATA-3377 URL: https://issues.apache.org/jira/browse/CARBONDATA-3377 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3377) String Type Column with huge strings and null values fails Range Compaction
[ https://issues.apache.org/jira/browse/CARBONDATA-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3377: - Description: A string-type column with huge strings and null values gives a NullPointerException when it is the range column and compaction is done. > String Type Column with huge strings and null values fails Range Compaction > --- > > Key: CARBONDATA-3377 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3377 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Priority: Minor > > A string-type column with huge strings and null values gives a > NullPointerException when it is the range column and compaction is done. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3376) Table containing Range Column as Partition Column fails Compaction
MANISH NALLA created CARBONDATA-3376: Summary: Table containing Range Column as Partition Column fails Compaction Key: CARBONDATA-3376 URL: https://issues.apache.org/jira/browse/CARBONDATA-3376 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA When the range column is also given as the partition column, compaction fails. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3375) GC Overhead limit exceeded error for huge data in Range Compaction
[ https://issues.apache.org/jira/browse/CARBONDATA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3375: - Summary: GC Overhead limit exceeded error for huge data in Range Compaction (was: GC Overhead limit exceeded error for huge data) > GC Overhead limit exceeded error for huge data in Range Compaction > -- > > Key: CARBONDATA-3375 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3375 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Priority: Minor > Time Spent: 40m > Remaining Estimate: 0h > > When only a single data item is present, it is launched as one single > task, which results in one executor getting overloaded. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3375) GC Overhead limit exceeded error for huge data
MANISH NALLA created CARBONDATA-3375: Summary: GC Overhead limit exceeded error for huge data Key: CARBONDATA-3375 URL: https://issues.apache.org/jira/browse/CARBONDATA-3375 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA When only a single data item is present, it is launched as one single task, which results in one executor getting overloaded. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3343) Support Compaction for Range Sort
[ https://issues.apache.org/jira/browse/CARBONDATA-3343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3343: - Attachment: Support Compaction for Range.docx > Support Compaction for Range Sort > - > > Key: CARBONDATA-3343 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3343 > Project: CarbonData > Issue Type: Improvement >Reporter: MANISH NALLA >Priority: Major > Attachments: Support Compaction for Range.docx > > > CarbonData supports Compaction for all sort scopes based on their > taskIds, i.e., we group the partitions (carbondata files) of different > segments which have the same taskId to one task and then compact. But this > would not be the correct way to handle the compaction in the case of Range > Sort where we have data divided into different ranges for different > segments. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3343) Support Compaction for Range Sort
MANISH NALLA created CARBONDATA-3343: Summary: Support Compaction for Range Sort Key: CARBONDATA-3343 URL: https://issues.apache.org/jira/browse/CARBONDATA-3343 Project: CarbonData Issue Type: Improvement Reporter: MANISH NALLA CarbonData supports Compaction for all sort scopes based on their taskIds, i.e., we group the partitions (carbondata files) of different segments which have the same taskId to one task and then compact. But this would not be the correct way to handle the compaction in the case of Range Sort where we have data divided into different ranges for different segments. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
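The taskId-based grouping described in CARBONDATA-3343 can be sketched with toy data (a hypothetical illustration, not CarbonData code) to show why it breaks for range-sorted segments: the same taskId may hold a different value range in each segment, so merging by taskId produces compacted files whose ranges overlap.

```python
# Toy illustration (hypothetical data, not CarbonData code): each tuple is
# (segment, taskId, (min, max) of the range column in that file).
files = [
    (0, 0, (1, 50)), (0, 1, (51, 100)),   # segment 0 split at 50
    (1, 0, (1, 30)), (1, 1, (31, 100)),   # segment 1 split at 30
]

# taskId-based grouping, as used for the other sort scopes:
by_task = {}
for segment, task_id, value_range in files:
    by_task.setdefault(task_id, []).append(value_range)

# taskId 1 now groups (51, 100) with (31, 100): the compacted file would
# overlap taskId 0's output, so the range invariant is broken.
print(by_task[1])  # -> [(51, 100), (31, 100)]
```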
[jira] [Updated] (CARBONDATA-3315) Range Filter query with two between clauses with an OR gives wrong results
[ https://issues.apache.org/jira/browse/CARBONDATA-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3315: - Summary: Range Filter query with two between clauses with an OR gives wrong results (was: Range Filter query with two between clauses gives wrong results) > Range Filter query with two between clauses with an OR gives wrong results > -- > > Key: CARBONDATA-3315 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3315 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Priority: Major > > # Create table t1(c1 string, c2 int) stored by 'carbondata' > tblproperties('sort_columns'='c2') > # insert some values into table t1 > # select * from t1 where c2 between 2 and 3 or c2 between 3 and 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3315) Range Filter query with two between clauses gives wrong results
MANISH NALLA created CARBONDATA-3315: Summary: Range Filter query with two between clauses gives wrong results Key: CARBONDATA-3315 URL: https://issues.apache.org/jira/browse/CARBONDATA-3315 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA # Create table t1(c1 string, c2 int) stored by 'carbondata' tblproperties('sort_columns'='c2') # insert some values into table t1 # select * from t1 where c2 between 2 and 3 or c2 between 3 and 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
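The final select in the CARBONDATA-3315 repro steps has well-defined expected semantics, which can be sketched outside Carbon with toy data (a hypothetical illustration; BETWEEN is inclusive on both ends):

```python
# Expected semantics of: c2 BETWEEN 2 AND 3 OR c2 BETWEEN 3 AND 4
# (toy data, not CarbonData code; BETWEEN is inclusive on both ends)
rows = [1, 2, 3, 4, 5]
expected = [c2 for c2 in rows if (2 <= c2 <= 3) or (3 <= c2 <= 4)]
print(expected)  # -> [2, 3, 4]
```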
[jira] [Created] (CARBONDATA-3268) Query on Varchar showing as Null in Presto
MANISH NALLA created CARBONDATA-3268: Summary: Query on Varchar showing as Null in Presto Key: CARBONDATA-3268 URL: https://issues.apache.org/jira/browse/CARBONDATA-3268 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3259) Documentation Update
MANISH NALLA created CARBONDATA-3259: Summary: Documentation Update Key: CARBONDATA-3259 URL: https://issues.apache.org/jira/browse/CARBONDATA-3259 Project: CarbonData Issue Type: Sub-task Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3257) Data Load is in No sort flow when version is upgraded even if sort columns are given. Also describe formatted displays wrong sort scope after refresh.
MANISH NALLA created CARBONDATA-3257: Summary: Data Load is in No sort flow when version is upgraded even if sort columns are given. Also describe formatted displays wrong sort scope after refresh. Key: CARBONDATA-3257 URL: https://issues.apache.org/jira/browse/CARBONDATA-3257 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3236) JVM Crash for insert into new table from old table
MANISH NALLA created CARBONDATA-3236: Summary: JVM Crash for insert into new table from old table Key: CARBONDATA-3236 URL: https://issues.apache.org/jira/browse/CARBONDATA-3236 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3223) Datasize and Indexsize showing 0B for 1.1 store when show segments is done
[ https://issues.apache.org/jira/browse/CARBONDATA-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3223: - Description: # Create table and load in 1.1 store. # Refresh and Load in 1.5.1 version. # Show Segments on the table will give 0B for the older segment. > Datasize and Indexsize showing 0B for 1.1 store when show segments is done > -- > > Key: CARBONDATA-3223 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3223 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Assignee: MANISH NALLA >Priority: Minor > > # Create table and load in 1.1 store. > # Refresh and Load in 1.5.1 version. > # Show Segments on the table will give 0B for the older segment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3223) Datasize and Indexsize showing 0B for 1.1 store when show segments is done
MANISH NALLA created CARBONDATA-3223: Summary: Datasize and Indexsize showing 0B for 1.1 store when show segments is done Key: CARBONDATA-3223 URL: https://issues.apache.org/jira/browse/CARBONDATA-3223 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3203) Compaction failing for table which is restructured
[ https://issues.apache.org/jira/browse/CARBONDATA-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3203: - Description: Steps to reproduce: # Create table with complex and primitive types. # Load data 2-3 times. # Drop one column. # Trigger Compaction. > Compaction failing for table which is restructured > --- > > Key: CARBONDATA-3203 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3203 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Assignee: MANISH NALLA >Priority: Minor > > Steps to reproduce: > # Create table with complex and primitive types. > # Load data 2-3 times. > # Drop one column. > # Trigger Compaction. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3203) Compaction failing for table which is restructured
MANISH NALLA created CARBONDATA-3203: Summary: Compaction failing for table which is restructured Key: CARBONDATA-3203 URL: https://issues.apache.org/jira/browse/CARBONDATA-3203 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3196) Compaction Failing for Complex datatypes with Dictionary Include
[ https://issues.apache.org/jira/browse/CARBONDATA-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3196: - Description: Steps to reproduce: # Create Table with Complex type and Dictionary Include Complex type. # Load data into the table 2-3 times. # Alter table compact 'major' > Compaction Failing for Complex datatypes with Dictionary Include > > > Key: CARBONDATA-3196 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3196 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Assignee: MANISH NALLA >Priority: Minor > > Steps to reproduce: > # Create Table with Complex type and Dictionary Include Complex type. > # Load data into the table 2-3 times. > # Alter table compact 'major' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3196) Compaction Failing for Complex datatypes with Dictionary Include
MANISH NALLA created CARBONDATA-3196: Summary: Compaction Failing for Complex datatypes with Dictionary Include Key: CARBONDATA-3196 URL: https://issues.apache.org/jira/browse/CARBONDATA-3196 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3192) Compaction Compatibility Failure
[ https://issues.apache.org/jira/browse/CARBONDATA-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3192: - Description: Table Created, Loaded and Altered (Column added) in 1.5.1 version and Refreshed, Altered (Added Column dropped), Loaded and Compacted with Varchar Columns in new version giving error. (was: Table Created, Loaded and Altered (Column added) in 1.5.1 version and Refreshed, Altered (Added Column dropped), Loaded and Compacted Giving error.) > Compaction Compatibility Failure > --- > > Key: CARBONDATA-3192 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3192 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Assignee: MANISH NALLA >Priority: Minor > > Table Created, Loaded and Altered (Column added) in 1.5.1 version and > Refreshed, Altered (Added Column dropped), Loaded and Compacted with Varchar > Columns in new version giving error. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3192) Compaction Compatibility Failure
MANISH NALLA created CARBONDATA-3192: Summary: Compaction Compatibility Failure Key: CARBONDATA-3192 URL: https://issues.apache.org/jira/browse/CARBONDATA-3192 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA Table Created, Loaded and Altered (Column added) in 1.5.1 version and Refreshed, Altered (Added Column dropped), Loaded and Compacted giving error. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3187) Global Dictionary Support for Complex Map
MANISH NALLA created CARBONDATA-3187: Summary: Global Dictionary Support for Complex Map Key: CARBONDATA-3187 URL: https://issues.apache.org/jira/browse/CARBONDATA-3187 Project: CarbonData Issue Type: Task Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3182) Fix SDV TestCase Failures in Delimiters
MANISH NALLA created CARBONDATA-3182: Summary: Fix SDV TestCase Failures in Delimiters Key: CARBONDATA-3182 URL: https://issues.apache.org/jira/browse/CARBONDATA-3182 Project: CarbonData Issue Type: Sub-task Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (CARBONDATA-3178) select query with in clause on timestamp column inconsistent with filter on same column
[ https://issues.apache.org/jira/browse/CARBONDATA-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724738#comment-16724738 ] MANISH NALLA commented on CARBONDATA-3178: -- The behaviour is the same as Hive. Please check. > select query with in clause on timestamp column inconsistent with filter on > same column > --- > > Key: CARBONDATA-3178 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3178 > Project: CarbonData > Issue Type: Bug > Components: data-query >Affects Versions: 1.5.1 > Environment: spark 2.2 >Reporter: Anshul Topnani >Priority: Minor > > Steps : > Create table : > CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION > string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 > bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 > decimal(36,36),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 > int) STORED BY 'org.apache.carbondata.format' ; > Load Data : > LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table > uniqdata OPTIONS('DELIMITER'=',' , > 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1'); > Select Queries: > select * from uniqdata where dob in ('1970-01-01 01:00:03.0'); > (beeline prints only the header row: cust_id, cust_name, active_emui_version, dob, doj, bigint_column1, bigint_column2, decimal_column1, decimal_column2, double_column1, double_column2, integer_column1; no data rows) > No rows selected (0.702 seconds) > select * from uniqdata where dob ='1970-01-01 01:00:03.0'; > (1 row: cust_id=9000, cust_name=CUST_NAME_0, active_emui_version=ACTIVE_EMUI_VERSION_0, dob=1970-01-01 01:00:03.0, doj=1970-01-01 02:00:03.0, bigint_column1=123372036854, bigint_column2=-223372036854, decimal_column1=12345678901.123400, decimal_column2=NULL, double_column1=1.12345674897976E10, double_column2=-1.12345674897976E10, integer_column1=1) > 1 row selected (0.57 seconds) > > Actual Issue : > Correct data is projected in case of filter query with '='. For the same column > with the in clause, no data is projected. > Expected : > Both the select queries should show the correct result. (As projected in the second > select query). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3179) DataLoad Failure in Map Data Type
MANISH NALLA created CARBONDATA-3179: Summary: DataLoad Failure in Map Data Type Key: CARBONDATA-3179 URL: https://issues.apache.org/jira/browse/CARBONDATA-3179 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA Data load fails for 'insert into table select * from table' when the table contains the Map datatype -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3153) Change of Complex Delimiters
MANISH NALLA created CARBONDATA-3153: Summary: Change of Complex Delimiters Key: CARBONDATA-3153 URL: https://issues.apache.org/jira/browse/CARBONDATA-3153 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-3098) Negative value exponents giving wrong results
[ https://issues.apache.org/jira/browse/CARBONDATA-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-3098: - Description: Problem: When the value of exponent is a negative number then the data is incorrect due to loss of precision of Floating point values and wrong calculation of the count of decimal points. Steps to reproduce: -> "create table float_c(f float) using carbon" -> "insert into float_c select '1.4E-38' " > Negative value exponents giving wrong results > - > > Key: CARBONDATA-3098 > URL: https://issues.apache.org/jira/browse/CARBONDATA-3098 > Project: CarbonData > Issue Type: Bug >Reporter: MANISH NALLA >Priority: Major > Time Spent: 1h 40m > Remaining Estimate: 0h > > Problem: When the value of exponent is a negative number then the data is > incorrect due to loss of precision of Floating point values and wrong > calculation of the count of decimal points. > > Steps to reproduce: > -> "create table float_c(f float) using carbon" > -> "insert into float_c select '1.4E-38' " -- This message was sent by Atlassian JIRA (v7.6.3#76005)
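The precision loss mentioned in the CARBONDATA-3098 description can be illustrated with a plain IEEE-754 single-precision round-trip in Python (an illustration of float32 behaviour in general, not CarbonData code):

```python
import struct

# Round-trip 1.4E-38 through IEEE-754 single precision, the storage
# width of a `float` column: the nearest float32 is not exactly 1.4e-38.
value = 1.4e-38
as_float32 = struct.unpack("f", struct.pack("f", value))[0]

print(as_float32 == value)  # -> False: precision was lost in the round-trip
print(abs(as_float32 - value) / value < 1e-6)  # -> True: but the relative error is tiny
```

This is why code handling negative exponents must compare against the single-precision value rather than the original decimal literal.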
[jira] [Created] (CARBONDATA-3098) Negative value exponents giving wrong results
MANISH NALLA created CARBONDATA-3098: Summary: Negative value exponents giving wrong results Key: CARBONDATA-3098 URL: https://issues.apache.org/jira/browse/CARBONDATA-3098 Project: CarbonData Issue Type: Bug Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-3017) Create DDL Support for Map Type
MANISH NALLA created CARBONDATA-3017: Summary: Create DDL Support for Map Type Key: CARBONDATA-3017 URL: https://issues.apache.org/jira/browse/CARBONDATA-3017 Project: CarbonData Issue Type: Sub-task Reporter: MANISH NALLA Assignee: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (CARBONDATA-2960) SDK reader not working without projection columns
[ https://issues.apache.org/jira/browse/CARBONDATA-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MANISH NALLA updated CARBONDATA-2960: - Description: (was:

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.carbondata.spark.testsuite.createTable.TestCreateDDLForComplexMapType

import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.test.util.QueryTest
import org.scalatest.BeforeAndAfterAll

class TestCreateDDLForComplexMapType extends QueryTest with BeforeAndAfterAll {

  private val conf: Configuration = new Configuration(false)

  override def beforeAll(): Unit = {
    sql("DROP TABLE IF EXISTS carbon")
  }

  test("Single Map One Level") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField map
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    assert(desc(0).get(1).asInstanceOf[String].trim.equals("map"))
  }

  test("Single Map One Level 2") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField map
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    sql("insert into carbon values('1:Nalla%2:Singh%1:Gupta%5000:Kumar')")
    sql("select * from carbon").show(false)
    // assert(desc(0).get(1).asInstanceOf[String].trim.equals("map"))
  }

  test("Single Map with Two Nested Level") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField map>
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    assert(desc(0).get(1).asInstanceOf[String].trim.equals("map>"))
  }

  test("Map Type with array type as value") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField map>
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    assert(desc(0).get(1).asInstanceOf[String].trim.equals("map>"))
  }

  test("Map Type with struct type as value") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField map>
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    assert(desc(0).get(1).asInstanceOf[String].trim.equals("map>"))
  }

  test("Map Type as child to struct type") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField struct>
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    assert(desc(0).get(1).asInstanceOf[String].trim.equals("struct>"))
  }

  test("Map Type as child to array type") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField array>
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    assert(desc(0).get(1).asInstanceOf[String].trim.equals("array>"))
  }

  test("Map Type as child to array type") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField array>>
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    assert(desc(0).get(1).asInstanceOf[String].trim.equals("array>>"))
  }

  test("3 levels") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField array>
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    /*val desc =*/ sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).show()/*.collect()*/
    sql("INSERT into carbon values('1:3$2:3$4:3')")
    // assert(desc(0).get(1).asInstanceOf[String].trim.equals("array>>"))
  }

  test("Test Load data in map") {
    sql("DROP TABLE IF EXISTS carbon")
    sql(
      s"""
         | CREATE TABLE carbon(
         | mapField map
         | )
         | STORED BY 'carbondata'
         """.stripMargin)
    val desc = sql(
      s"""
         | Describe Formatted
         | carbon
         """.stripMargin).collect()
    sql("insert into carbon
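The map literals in the tests above (e.g. '1:Nalla%2:Singh%1:Gupta%5000:Kumar') pair integer keys with string values, with ':' apparently separating key from value and '%' separating entries. A minimal standalone sketch of that parsing, assuming those delimiters (MapLiteralDemo is a hypothetical name, not part of the test suite), shows why duplicate keys collapse to the last value on load:

```scala
object MapLiteralDemo {
  // Parse a map literal, assuming '%' separates entries and ':' separates
  // each key from its value, as the test data above suggests.
  def parseMap(s: String): Map[Int, String] =
    s.split('%').map { entry =>
      val Array(k, v) = entry.split(':')
      k.toInt -> v
    }.toMap // duplicate keys: the last occurrence wins

  def main(args: Array[String]): Unit = {
    // Key 1 appears twice, so the resulting map has three entries,
    // with 1 mapped to "Gupta".
    println(parseMap("1:Nalla%2:Singh%1:Gupta%5000:Kumar"))
  }
}
```

This mirrors the behaviour the "Single Map One Level 2" test exercises: four entries in, three rows in the map column out.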
[jira] [Created] (CARBONDATA-2972) Debug Logs and a function for type of Adaptive Encoding
MANISH NALLA created CARBONDATA-2972: Summary: Debug Logs and a function for type of Adaptive Encoding Key: CARBONDATA-2972 URL: https://issues.apache.org/jira/browse/CARBONDATA-2972 Project: CarbonData Issue Type: Improvement Reporter: MANISH NALLA -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (CARBONDATA-2960) SDK reader not working without projection columns
MANISH NALLA created CARBONDATA-2960: Summary: SDK reader not working without projection columns Key: CARBONDATA-2960 URL: https://issues.apache.org/jira/browse/CARBONDATA-2960 Project: CarbonData Issue Type: Improvement Reporter: MANISH NALLA Assignee: MANISH NALLA
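The fix direction for CARBONDATA-2960 can be illustrated in isolation: a reader builder that receives no projection should fall back to the table's full schema rather than failing. This is a hedged sketch with a hypothetical ReaderBuilder stand-in, not the actual CarbonData SDK class or its signatures:

```scala
// Hypothetical stand-in for an SDK reader builder: when the caller never
// sets a projection, build() defaults to every column in the schema
// instead of failing.
case class ReaderBuilder(schema: Seq[String], proj: Option[Seq[String]] = None) {
  def projection(cols: Seq[String]): ReaderBuilder = copy(proj = Some(cols))
  def build(): Seq[String] = proj.getOrElse(schema) // fall back to all columns
}

object ReaderDemo {
  def main(args: Array[String]): Unit = {
    val schema = Seq("id", "name", "salary")
    println(ReaderBuilder(schema).build())                       // all columns
    println(ReaderBuilder(schema).projection(Seq("id")).build()) // only "id"
  }
}
```

The design choice is simply that the absence of a projection is a valid state with a well-defined default, which is the behaviour the issue summary asks for.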