[jira] [Created] (CARBONDATA-1717) remove sc broadcast to get hadoop configuration

2017-11-15 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1717:
---

 Summary: remove sc broadcast to get hadoop configuration
 Key: CARBONDATA-1717
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1717
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-1809) Add Create Table Event

2017-11-25 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1809:
---

 Summary: Add Create Table Event
 Key: CARBONDATA-1809
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1809
 Project: CarbonData
  Issue Type: Improvement
Reporter: Akash R Nilugal
Priority: Minor


Add Create Table Event





[jira] [Assigned] (CARBONDATA-1789) Carbon1.3.0 Concurrent Load-Drop: user is able to drop table even if insert/load job is running

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1789:
---

Assignee: Akash R Nilugal

> Carbon1.3.0 Concurrent Load-Drop: user is able to drop table even if 
> insert/load job is running
> ---
>
> Key: CARBONDATA-1789
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1789
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Assignee: Akash R Nilugal
>  Labels: dfx
> Fix For: 1.3.0
>
>
> Carbon1.3.0 Concurrent Load-Drop: user is able to drop table even if 
> insert/load job is running.
> Steps:
> 1: Create a table
> 2: Start an insert job
> 3: Concurrently drop the table
> 4: Observe that the drop succeeds
> 5: Observe that the insert job keeps running and fails after some time
> Expected behaviour: the drop should wait for the insert job to complete
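
The expected behaviour above (drop waiting for a running insert) is the classic shared/exclusive lock pattern. A minimal Python sketch of that pattern, assuming an in-memory registry; the class and method names are invented for illustration and are not CarbonData's actual locking code:

```python
import threading

class TableLockRegistry:
    """Toy shared/exclusive table lock: loads hold shared access,
    drop needs exclusive access and therefore waits for running loads."""

    def __init__(self):
        self._cond = threading.Condition()
        self._active_loads = 0

    def start_load(self):
        # A load/insert job registers itself before writing data.
        with self._cond:
            self._active_loads += 1

    def finish_load(self):
        # On completion (success or failure) the load releases its slot.
        with self._cond:
            self._active_loads -= 1
            self._cond.notify_all()

    def drop_table(self, do_drop):
        # Drop blocks until every in-flight load/insert has finished.
        with self._cond:
            while self._active_loads > 0:
                self._cond.wait()
            do_drop()
```

With a scheme like this, the drop in step 3 would block instead of succeeding while the insert of step 2 is still writing.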





[jira] [Created] (CARBONDATA-1910) do not allow tupleid, referenceid and positionReference as column names

2017-12-19 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1910:
---

 Summary: do not allow tupleid, referenceid and positionReference 
as column names
 Key: CARBONDATA-1910
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1910
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


do not allow tupleid, referenceid and positionReference as column names: if a 
table is created with these keywords as columns and a delete is then attempted 
on those columns, an error is thrown
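
A guard for this would be a simple name check at table-creation time. A sketch only; the reserved set is taken from the issue title, and the function name is an assumption, not CarbonData's real validator:

```python
# Hypothetical reserved set, taken from the issue title.
RESERVED_COLUMN_NAMES = {"tupleid", "referenceid", "positionreference"}

def validate_column_names(columns):
    """Reject any user column whose lower-cased name collides with an
    internal keyword, instead of failing later during delete."""
    clashes = [c for c in columns if c.lower() in RESERVED_COLUMN_NAMES]
    if clashes:
        raise ValueError(f"reserved column name(s): {clashes}")
    return columns
```

Rejecting the name at create time surfaces the problem immediately, rather than at the first delete on such a column.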





[jira] [Created] (CARBONDATA-1915) In the insert into and the update flow when static values are inserted, the preferred locations come back empty

2017-12-19 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1915:
---

 Summary: In the insert into and the update flow when static values 
are inserted, the preferred locations come back empty
 Key: CARBONDATA-1915
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1915
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


Test steps:
CREATE TABLE carbon_01(imei string,age int,task bigint,num double,level 
decimal(10,3),productdate timestamp)STORED BY 'org.apache.carbondata.format';
CREATE TABLE carbon_02(imei string,age int,task bigint,num double,level 
decimal(10,3),productdate timestamp,name string,point int)STORED BY 
'org.apache.carbondata.format';
LOAD DATA INPATH 'hdfs://hacluster/mytest/moredata01.csv'  INTO TABLE carbon_02 
options ('DELIMITER'=',', 'QUOTECHAR'='"','FILEHEADER' = 
'imei,age,task,num,level,productdate,name,point');
insert into carbon_01 select imei,age,task,num,level,productdate from carbon_02 
where age is not NULL;
show segments for table carbon_01;
select * from carbon_01;
update carbon_01 set (imei) = ("RNG") where age <=0;
select * from carbon_01;
update carbon_01 set (imei) = ("SSG") where num in (15.5);
select * from carbon_01;
delete from carbon_01 where imei IN ('RNG','SSG');
select * from carbon_01;

result: after updating the table, querying it sometimes takes a long time to 
return results, or sometimes fails; if the query fails, the JDBCServer 
master/standby switches over

Expected: updating the table and then querying it should work
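
A common way to handle empty preferred locations is to fall back to all active nodes so the scheduler still places the tasks. A sketch under that assumption; this is not the actual CarbonData fix, and the function name is invented:

```python
def preferred_locations(split_locations, active_nodes):
    """If a split reports no preferred hosts (as happens here for
    inserts of static values), fall back to every active node
    rather than returning an empty list."""
    return list(split_locations) if split_locations else list(active_nodes)
```

The fallback trades data locality for guaranteed scheduling, which only matters for splits that had no locality information to begin with.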






[jira] [Created] (CARBONDATA-1916) Correct the database location path during carbon drop database

2017-12-19 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1916:
---

 Summary: Correct the database location path during carbon drop 
database
 Key: CARBONDATA-1916
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1916
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


when drop database is called, the path formed to delete the database directory 
is wrong; so when drop database is executed, the operation reports success but 
the database directory is still present in HDFS





[jira] [Closed] (CARBONDATA-1915) In the insert into and the update flow when static values are inserted, the preferred locations come back empty

2017-12-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal closed CARBONDATA-1915.
---
Resolution: Fixed

> In the insert into and the update flow when static values are inserted, the 
> preferred locations come back empty
> 
>
> Key: CARBONDATA-1915
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1915
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Test steps:
> CREATE TABLE carbon_01(imei string,age int,task bigint,num double,level 
> decimal(10,3),productdate timestamp)STORED BY 'org.apache.carbondata.format';
> CREATE TABLE carbon_02(imei string,age int,task bigint,num double,level 
> decimal(10,3),productdate timestamp,name string,point int)STORED BY 
> 'org.apache.carbondata.format';
> LOAD DATA INPATH 'hdfs://hacluster/mytest/moredata01.csv'  INTO TABLE 
> carbon_02 options ('DELIMITER'=',', 'QUOTECHAR'='"','FILEHEADER' = 
> 'imei,age,task,num,level,productdate,name,point');
> insert into carbon_01 select imei,age,task,num,level,productdate from 
> carbon_02 where age is not NULL;
> show segments for table carbon_01;
> select * from carbon_01;
> update carbon_01 set (imei) = ("RNG") where age <=0;
> select * from carbon_01;
> update carbon_01 set (imei) = ("SSG") where num in (15.5);
> select * from carbon_01;
> delete from carbon_01 where imei IN ('RNG','SSG');
> select * from carbon_01;
> result: after updating the table, querying it sometimes takes a long time to 
> return results, or sometimes fails; if the query fails, the JDBCServer 
> master/standby switches over
> Expected: updating the table and then querying it should work





[jira] [Created] (CARBONDATA-1876) clean all the InProgress segments for all databases during session initialization

2017-12-07 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1876:
---

 Summary: clean all the InProgress segments for all databases 
during session initialization
 Key: CARBONDATA-1876
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1876
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


clean all the InProgress segments for all databases during session 
initialization: when the carbon session initializes, clean up all in-progress 
segments across all user-created databases.
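
The cleanup described could look like the following sketch, assuming a simple in-memory view of segment statuses; the data model here is invented for illustration, not CarbonData's real segment metadata API:

```python
def clean_in_progress_segments(store):
    """Walk every database/table at session start and drop segments
    still marked IN_PROGRESS, e.g. ones left behind by a crashed load.
    Returns the (database, table) pairs that were cleaned."""
    cleaned = []
    for db, tables in store.items():
        for table, segments in tables.items():
            kept = [s for s in segments if s["status"] != "IN_PROGRESS"]
            if len(kept) != len(segments):
                cleaned.append((db, table))
            tables[table] = kept
    return cleaned
```

Running this once at initialization keeps stale in-progress segments from earlier crashed sessions out of every subsequent query.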





[jira] [Assigned] (CARBONDATA-1759) (Carbon1.3.0 - Clean Files) Clean command is not working correctly for segments marked for delete due to insert overwrite job

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1759:
---

Assignee: Akash R Nilugal

> (Carbon1.3.0 - Clean Files) Clean command is not working correctly for  
> segments marked for delete due to insert overwrite job
> --
>
> Key: CARBONDATA-1759
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1759
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Assignee: Akash R Nilugal
>  Labels: dfx
>
> Carbon1.3.0  Clean command is not working correctly for  segments marked for 
> delete due to insert overwrite job.
> 1: Create a table
> CREATE TABLE IF NOT EXISTS flow_carbon_new999(txn_dte String,dt String,txn_bk 
> String,txn_br String,own_bk String,own_br String,opp_bk String,bus_opr_cde 
> String,opt_prd_cde String,cus_no String,cus_ac String,opp_ac_nme  
> String,opp_ac String,bv_no  String,aco_ac String,ac_dte String,txn_cnt 
> int,jrn_par int,mfm_jrn_no String,cbn_jrn_no String,ibs_jrn_no String,vch_no 
> String,vch_seq String,srv_cde String,bus_cd_no  String,id_flg String,bv_cde 
> String,txn_time  String,txn_tlr String,ety_tlr String,ety_bk String,ety_br 
> String,bus_pss_no String,chk_flg String,chk_tlr String,chk_jrn_no String,  
> bus_sys_no String,txn_sub_cde String,fin_bus_cde String,fin_bus_sub_cde 
> String,chl  String,tml_id String,sus_no String,sus_seq String,  cho_seq 
> String,  itm_itm String,itm_sub String,itm_sss String,dc_flg String,amt  
> decimal(15,2),bal  decimal(15,2),ccy  String,spv_flg String,vch_vld_dte 
> String,pst_bk String,pst_br String,ec_flg String,aco_tlr String,gen_flg 
> String,his_rec_sum_flg String,his_flg String,vch_typ String,val_dte 
> String,opp_ac_flg String,cmb_flg String,ass_vch_flg String,cus_pps_flg 
> String,bus_rmk_cde String,vch_bus_rmk String,tec_rmk_cde String,vch_tec_rmk 
> String,gems_last_upd_d String,maps_date String,maps_job String)STORED BY 
> 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='txn_cnt,jrn_par,amt,bal','No_Inverted_Index'=
>  'txn_dte,dt,txn_bk,txn_br,own_bk ,own_br ,opp_bk ,bus_opr_cde ,opt_prd_cde 
> ,cus_no ,cus_ac ,opp_ac_nme  ,opp_ac ,bv_no  ,aco_ac ,ac_dte ,txn_cnt  
> ,jrn_par  ,mfm_jrn_no ,cbn_jrn_no ,ibs_jrn_no ,vch_no ,vch_seq ,srv_cde 
> ,bus_cd_no  ,id_flg ,bv_cde ,txn_time  ,txn_tlr ,ety_tlr ,ety_bk ,ety_br 
> ,bus_pss_no ,chk_flg ,chk_tlr ,chk_jrn_no , bus_sys_no ,txn_sub_cde 
> ,fin_bus_cde ,fin_bus_sub_cde ,chl  ,tml_id ,sus_no ,sus_seq , cho_seq , 
> itm_itm ,itm_sub ,itm_sss ,dc_flg ,amt,bal,ccy  ,spv_flg ,vch_vld_dte ,pst_bk 
> ,pst_br ,ec_flg ,aco_tlr ,gen_flg ,his_rec_sum_flg ,his_flg ,vch_typ ,val_dte 
> ,opp_ac_flg ,cmb_flg ,ass_vch_flg ,cus_pps_flg ,bus_rmk_cde ,vch_bus_rmk 
> ,tec_rmk_cde ,vch_tec_rmk ,gems_last_upd_d ,maps_date ,maps_job' );
> 2: start a data load.
> LOAD DATA inpath 'hdfs://hacluster/user/test/20140101_1_1.csv' into 
> table flow_carbon_new999 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','header'='false');
> 3: Run an insert overwrite job
> insert into table  flow_carbon_new999 select * from flow_carbon_new666;
> 4: run show segment query:
> show segments for table ajeet.flow_carbon_new999
> 5: Observe that all previous segments are marked for delete
> 6: run clean query
> CLEAN FILES FOR TABLE ajeet.flow_carbon_new999;
> 7: again run show segment query
> 8: Observe that still all previous segments which are marked for delete are 
> shown as result.





[jira] [Assigned] (CARBONDATA-1755) Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert overwrite and update job concurrently.

2017-12-11 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1755:
---

Assignee: Kushal

> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> -
>
> Key: CARBONDATA-1755
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1755
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Assignee: Kushal
>Priority: Minor
>  Labels: dfx
>
> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> Updated data will be overwritten by the insert overwrite job, so there is no 
> point in running an update job while an insert overwrite is in progress.
> Steps:
> 1: Create a table
> 2: Do a data load
> 3: Run an insert overwrite job.
> 4: Run an update job while the overwrite job is still running.
> 5: Observe that the update job finishes and after that the overwrite job also 
> finishes.
> 6: All previous segments are marked for delete and the update job has no 
> effect; it consumes resources unnecessarily.





[jira] [Assigned] (CARBONDATA-1755) Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert overwrite and update job concurrently.

2017-12-11 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1755:
---

Assignee: (was: Kushal)

> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> -
>
> Key: CARBONDATA-1755
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1755
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Priority: Minor
>  Labels: dfx
>
> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> Updated data will be overwritten by the insert overwrite job, so there is no 
> point in running an update job while an insert overwrite is in progress.
> Steps:
> 1: Create a table
> 2: Do a data load
> 3: Run an insert overwrite job.
> 4: Run an update job while the overwrite job is still running.
> 5: Observe that the update job finishes and after that the overwrite job also 
> finishes.
> 6: All previous segments are marked for delete and the update job has no 
> effect; it consumes resources unnecessarily.





[jira] [Closed] (CARBONDATA-1717) remove sc broadcast to get hadoop configuration

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal closed CARBONDATA-1717.
---
Resolution: Fixed

> remove sc broadcast to get hadoop configuration
> ---
>
> Key: CARBONDATA-1717
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1717
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>






[jira] [Closed] (CARBONDATA-1607) Support Column comment for carbon table

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal closed CARBONDATA-1607.
---
Resolution: Duplicate

> Support Column comment for carbon table
> ---
>
> Key: CARBONDATA-1607
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1607
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>
> support column comments for tables, so that when a table is described we can 
> show the comment for each specific column; if no comment is given, the 
> default value is null





[jira] [Closed] (CARBONDATA-1606) Support Column comment for carbon table

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal closed CARBONDATA-1606.
---
Resolution: Duplicate

> Support Column comment for carbon table
> ---
>
> Key: CARBONDATA-1606
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1606
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>
> support column comments for tables, so that when a table is described we can 
> show the comment for each specific column; if no comment is given, the 
> default value is null





[jira] [Closed] (CARBONDATA-1605) Support Column comment for carbon table

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal closed CARBONDATA-1605.
---
Resolution: Duplicate

> Support Column comment for carbon table
> ---
>
> Key: CARBONDATA-1605
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1605
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>
> support column comments for tables, so that when a table is described we can 
> show the comment for each specific column; if no comment is given, the 
> default value is null





[jira] [Closed] (CARBONDATA-1604) Support Column comment for carbon table

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal closed CARBONDATA-1604.
---
Resolution: Duplicate

> Support Column comment for carbon table
> ---
>
> Key: CARBONDATA-1604
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1604
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>






[jira] [Assigned] (CARBONDATA-1791) Carbon1.3.0 Concurrent Load-Alter: user is able to Alter table even if insert/load job is running

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1791:
---

Assignee: Akash R Nilugal

> Carbon1.3.0 Concurrent Load-Alter: user is able to Alter table even if 
> insert/load job is running
> -
>
> Key: CARBONDATA-1791
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1791
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
> Environment:  
> 3 Node ant cluster 
>Reporter: Ajeet Rai
>Assignee: Akash R Nilugal
>  Labels: dfx
> Fix For: 1.3.0
>
>
> Carbon1.3.0 Concurrent Load-Alter: user is able to alter a table even if an 
> insert/load job is running.
> Steps:
> 1: Create a table
> 2: Start an insert job
> 3: Concurrently alter the table (add, drop, rename)
> 4: Observe that the alter succeeds
> 5: Observe that the insert job keeps running and fails after some time if the 
> table was renamed; otherwise the alter succeeds (for add/drop column)
> Expected behaviour: the alter should wait for the insert job to complete





[jira] [Assigned] (CARBONDATA-1761) (Carbon1.3.0 - DELETE SEGMENT BY ID) In Progress Segment is marked for delete if respective id is given in delete segment by id query

2017-12-05 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1761:
---

Assignee: Akash R Nilugal

> (Carbon1.3.0 - DELETE SEGMENT BY ID) In Progress Segment is marked for delete 
> if respective id is given in delete segment by id query
> -
>
> Key: CARBONDATA-1761
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1761
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Assignee: Akash R Nilugal
>  Labels: dfx
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> (Carbon1.3.0 - DELETE SEGMENT BY ID) In Progress Segment is marked for delete 
> if respective id is given in delete segment by id query.
> 1: Create a table
> CREATE TABLE IF NOT EXISTS flow_carbon_new999(txn_dte String,dt String,txn_bk 
> String,txn_br String,own_bk String,own_br String,opp_bk String,bus_opr_cde 
> String,opt_prd_cde String,cus_no String,cus_ac String,opp_ac_nme 
> String,opp_ac String,bv_no String,aco_ac String,ac_dte String,txn_cnt 
> int,jrn_par int,mfm_jrn_no String,cbn_jrn_no String,ibs_jrn_no String,vch_no 
> String,vch_seq String,srv_cde String,bus_cd_no String,id_flg String,bv_cde 
> String,txn_time String,txn_tlr String,ety_tlr String,ety_bk String,ety_br 
> String,bus_pss_no String,chk_flg String,chk_tlr String,chk_jrn_no String, 
> bus_sys_no String,txn_sub_cde String,fin_bus_cde String,fin_bus_sub_cde 
> String,chl String,tml_id String,sus_no String,sus_seq String, cho_seq String, 
> itm_itm String,itm_sub String,itm_sss String,dc_flg String,amt 
> decimal(15,2),bal decimal(15,2),ccy String,spv_flg String,vch_vld_dte 
> String,pst_bk String,pst_br String,ec_flg String,aco_tlr String,gen_flg 
> String,his_rec_sum_flg String,his_flg String,vch_typ String,val_dte 
> String,opp_ac_flg String,cmb_flg String,ass_vch_flg String,cus_pps_flg 
> String,bus_rmk_cde String,vch_bus_rmk String,tec_rmk_cde String,vch_tec_rmk 
> String,gems_last_upd_d String,maps_date String,maps_job String)STORED BY 
> 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='txn_cnt,jrn_par,amt,bal','No_Inverted_Index'=
>  'txn_dte,dt,txn_bk,txn_br,own_bk ,own_br ,opp_bk ,bus_opr_cde ,opt_prd_cde 
> ,cus_no ,cus_ac ,opp_ac_nme ,opp_ac ,bv_no ,aco_ac ,ac_dte ,txn_cnt ,jrn_par 
> ,mfm_jrn_no ,cbn_jrn_no ,ibs_jrn_no ,vch_no ,vch_seq ,srv_cde ,bus_cd_no 
> ,id_flg ,bv_cde ,txn_time ,txn_tlr ,ety_tlr ,ety_bk ,ety_br ,bus_pss_no 
> ,chk_flg ,chk_tlr ,chk_jrn_no , bus_sys_no ,txn_sub_cde ,fin_bus_cde 
> ,fin_bus_sub_cde ,chl ,tml_id ,sus_no ,sus_seq , cho_seq , itm_itm ,itm_sub 
> ,itm_sss ,dc_flg ,amt,bal,ccy ,spv_flg ,vch_vld_dte ,pst_bk ,pst_br ,ec_flg 
> ,aco_tlr ,gen_flg ,his_rec_sum_flg ,his_flg ,vch_typ ,val_dte ,opp_ac_flg 
> ,cmb_flg ,ass_vch_flg ,cus_pps_flg ,bus_rmk_cde ,vch_bus_rmk ,tec_rmk_cde 
> ,vch_tec_rmk ,gems_last_upd_d ,maps_date ,maps_job' );
> 2: start a data load.
> LOAD DATA inpath 'hdfs://hacluster/user/test/20140101_1_1.csv' into 
> table flow_carbon_new999 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','header'='false');
> 3: Run an insert into/overwrite job
> insert into table flow_carbon_new999 select * from flow_carbon_new666;
> 4: show segments for table flow_carbon_new999;
> 5: Observe that the load/insert/overwrite job starts with a new segment id
> 6: Now run a delete segment by id query with this id:
> DELETE FROM TABLE ajeet.flow_carbon_new999 WHERE SEGMENT.ID IN (34)
> 7: Again run show segments and see that this segment, which is still in 
> progress, is marked for delete.
> 8: Observe that the insert/load job is still running and after some time (in 
> the next load/insert/overwrite job) it fails with the error below:
> Error: java.lang.RuntimeException: It seems insert overwrite has been issued 
> during load (state=,code=0)
> This is not correct behaviour and should be handled.
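
The fix implied here is that delete-segment-by-id should skip segments whose load is still running. A minimal sketch under that assumption; the status strings and function name are illustrative, not CarbonData's actual implementation:

```python
def delete_segments_by_id(segments, ids):
    """Mark only completed segments for delete; segments still
    IN_PROGRESS are skipped and their ids returned so the caller
    can report them back to the user."""
    skipped = []
    for seg in segments:
        if seg["id"] in ids:
            if seg["status"] == "IN_PROGRESS":
                skipped.append(seg["id"])
            else:
                seg["status"] = "MARKED_FOR_DELETE"
    return skipped
```

With this check, segment 34 in the steps above would be reported as skipped instead of being marked for delete under a running load.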





[jira] [Updated] (CARBONDATA-2463) if two insert operations are running concurrently 1 task fails and causes wrong no of records in select

2018-05-09 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2463:

Description: 
If two insert operations run concurrently, one task fails for one of the jobs, 
yet both jobs are reported as successful.

 

Below is the exception:
org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException: 
Error while initializing data handler :

> if two insert operations are running concurrently 1 task fails and causes 
> wrong no of records in select
> ---
>
> Key: CARBONDATA-2463
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2463
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Rahul Kumar
>Priority: Major
>
> If two insert operations run concurrently, one task fails for one of the 
> jobs, yet both jobs are reported as successful.
>  
> Below is the exception:
> org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
>  Error while initializing data handler :



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2440) In SDK the user cannot specify the unsafe memory, so it should be taken completely from heap, and it should not be sorted using unsafe.

2018-05-04 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2440:
---

 Summary: In SDK the user cannot specify the unsafe memory, so it 
should be taken completely from heap, and it should not be sorted using unsafe.
 Key: CARBONDATA-2440
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2440
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal








[jira] [Created] (CARBONDATA-2484) Refactor the datamap code and clear the datamap from executor on table drop

2018-05-15 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2484:
---

 Summary: Refactor the datamap code and clear the datamap from 
executor on table drop
 Key: CARBONDATA-2484
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2484
 Project: CarbonData
  Issue Type: Sub-task
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


During query, blockletDataMapFactory maintains a segmentMap from segmentId to 
the list of index files; it is used while getting the extended blocklet, by 
checking whether the blocklet is present in the index or not.
In the Lucene case, the datamap job is launched and, during pruning, entries 
are added to the segmentMap on the executor. This map is cleared on the driver 
when drop table is called, but not on the executor, so when a query is fired 
after the table or datamap has been dropped, the Lucene query fails.
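
The shape of the fix being described (clearing the cached mapping on the driver and on every executor, not just the driver) can be sketched like this. The classes and names here are toy stand-ins, not CarbonData's real datamap classes:

```python
class SegmentMapCache:
    """Toy stand-in for the segmentId -> index-file map kept on the
    driver and on each executor."""

    def __init__(self):
        self._map = {}

    def put(self, segment_id, index_files):
        self._map[segment_id] = list(index_files)

    def clear_segments(self, segment_ids):
        for sid in segment_ids:
            self._map.pop(sid, None)

    def size(self):
        return len(self._map)

def drop_table(driver_cache, executor_caches, segment_ids):
    # Clear the mapping on the driver *and* on every executor, so a
    # later query cannot hit stale entries for the dropped table.
    driver_cache.clear_segments(segment_ids)
    for cache in executor_caches:
        cache.clear_segments(segment_ids)
```

The bug report corresponds to the executor loop being missing: the driver cache was cleared while executor caches kept stale entries.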





[jira] [Created] (CARBONDATA-2520) datamap writers are not getting closed on task failure

2018-05-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2520:
---

 Summary: datamap writers are not getting closed on task failure
 Key: CARBONDATA-2520
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2520
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


*Problem:* The datamap writers registered to the listener are closed or 
finished only in the load success case, not in any failure case. While testing 
Lucene it was found that after a task fails, the writer is not closed, so the 
write.lock file written in the Lucene index folder still exists; when the next 
task comes to write an index in the same directory, it fails with a "lock file 
already exists" error.
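
The usual remedy is to close the writer in a finally-style cleanup path so it runs on failure as well as success. A toy sketch of that pattern, with a lock-file sentinel mimicking Lucene's write.lock; none of these names are CarbonData's actual classes:

```python
class DataMapWriter:
    """Toy writer that leaves a lock sentinel while open, mirroring
    Lucene's write.lock behaviour in the index directory."""

    def __init__(self, locks, path):
        self.locks, self.path = locks, path
        if path in locks:
            raise RuntimeError("lock file already exists: " + path)
        locks.add(path)

    def close(self):
        self.locks.discard(self.path)

def run_task(locks, path, body):
    writer = DataMapWriter(locks, path)
    try:
        body(writer)
    finally:
        # Close on success *and* on failure, so the next task writing
        # to the same index directory does not hit a stale lock file.
        writer.close()
```

Without the `finally`, a failed task leaves the sentinel behind and the next task raises the "lock file already exists" error described above.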





[jira] [Updated] (CARBONDATA-2585) Support Adding Local Dictionary configuration in Create table statement

2018-06-19 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2585:

Description: 
Allow user to pass local dictionary configuration in Create table statement.

*LOCAL_DICTIONARY_ENABLE*

*CARBON_LOCALDICT_THRESHOLD*

*LOCAL_DICTIONARY_INCLUDE*

*LOCAL_DICTIONARY_EXCLUDE*

CREATE TABLE carbontable(
column1 string,
column2 string,
column3 LONG )
STORED BY 'carbondata'
TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',
'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')

  was:
Allow user to pass local dictionary configuration in Create table statement.

*ENABLE_LOCAL_DICT*

*CARBON_LOCALDICT_THRESHOLD*

CREATE TABLE carbontable(
column1 string,
column2 string,
column3 LONG )
STORED BY 'carbondata'
TBLPROPERTIES('ENABLE_LOCAL_DICT'='true','CARBON_LOCALDICT_THRESHOLD'='1000')


> Support Adding Local Dictionary configuration in Create table statement
> ---
>
> Key: CARBONDATA-2585
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2585
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Allow user to pass local dictionary configuration in Create table statement.
> *LOCAL_DICTIONARY_ENABLE*
> *CARBON_LOCALDICT_THRESHOLD*
> *LOCAL_DICTIONARY_INCLUDE*
> *LOCAL_DICTIONARY_EXCLUDE*
> CREATE TABLE carbontable(
> column1 string,
> column2 string,
> column3 LONG )
> STORED BY 'carbondata'
> TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',
> 'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')





[jira] [Updated] (CARBONDATA-2586) Support Showing local dictionary configuration in desc formatted command

2018-06-19 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2586:

Description: 
Support Showing local dictionary parameter in Desc formatted command
 # *LOCAL_DICTIONARY_ENABLE*
 # *CARBON_LOCALDICT_THRESHOLD*
 # *LOCAL_DICTIONARY_INCLUDE*
 # *LOCAL_DICTIONARY_EXCLUDE*

  was:
Support Showing local dictionary parameter in Desc formatted command
 # *LOCAL_DICTIONARY_ENABLE*
 # *CARBON_LOCALDICT_THRESHOLD*
 # *LOCAL_DICTIONARY_INCLUDE*
 # *LOCAL_DICTIONARY_EXCLUDE*


> Support Showing local dictionary configuration in desc formatted command
> 
>
> Key: CARBONDATA-2586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2586
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>
> Support Showing local dictionary parameter in Desc formatted command
>  # *LOCAL_DICTIONARY_ENABLE*
>  # *CARBON_LOCALDICT_THRESHOLD*
>  # *LOCAL_DICTIONARY_INCLUDE*
>  # *LOCAL_DICTIONARY_EXCLUDE*





[jira] [Updated] (CARBONDATA-2586) Support Showing local dictionary configuration in desc formatted command

2018-06-19 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2586:

Description: 
Support Showing local dictionary parameter in Desc formatted command
 # LOCAL_DICTIONARY_ENABLE
 # CARBON_LOCALDICT_THRESHOLD
 # LOCAL_DICTIONARY_INCLUDE
 # LOCAL_DICTIONARY_EXCLUDE

  was:
Support Showing local dictionary parameter in Desc formatted command
 # LOCAL_DICTIONARY_ENABLE
 # CARBON_LOCALDICT_THRESHOLD
 # LOCAL_DICTIONARY_INCLUDE
 # LOCAL_DICTIONARY_EXCLUDE


> Support Showing local dictionary configuration in desc formatted command
> 
>
> Key: CARBONDATA-2586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2586
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>
> Support Showing local dictionary parameter in Desc formatted command
>  # LOCAL_DICTIONARY_ENABLE
>  # CARBON_LOCALDICT_THRESHOLD
>  # LOCAL_DICTIONARY_INCLUDE
>  # LOCAL_DICTIONARY_EXCLUDE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2586) Support Showing local dictionary configuration in desc formatted command

2018-06-19 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2586:

Description: 
Support Showing local dictionary parameter in Desc formatted command
 # LOCAL_DICTIONARY_ENABLE
 # CARBON_LOCALDICT_THRESHOLD
 # LOCAL_DICTIONARY_INCLUDE
 # LOCAL_DICTIONARY_EXCLUDE

  was:
Support Showing local dictionary parameter in Desc formatted command
 # CARBON_LOCALDICT_THRESHOLD
 # ENABLE_LOCAL_DICT


> Support Showing local dictionary configuration in desc formatted command
> 
>
> Key: CARBONDATA-2586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2586
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>
> Support Showing local dictionary parameter in Desc formatted command
>  # LOCAL_DICTIONARY_ENABLE
>  # CARBON_LOCALDICT_THRESHOLD
>  # LOCAL_DICTIONARY_INCLUDE
>  # LOCAL_DICTIONARY_EXCLUDE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2585) Support Adding Local Dictionary configuration in Create table statement

2018-06-19 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2585:

Description: 
Allow user to pass local dictionary configuration in Create table statement.

LOCAL_DICTIONARY_ENABLE: enable or disable local dictionary generation for a 
table (by default, local dictionary generation is enabled)

CARBON_LOCALDICT_THRESHOLD: the threshold value for local dictionary 
generation (default is 1000)

LOCAL_DICTIONARY_INCLUDE: list of columns for which the user wants to generate 
a local dictionary (by default, all no-dictionary string columns are considered 
for generation)

LOCAL_DICTIONARY_EXCLUDE: list of columns for which the user does not want to 
generate a local dictionary (by default, no columns are excluded unless 
configured)

CREATE TABLE carbontable(

column1 string,

column2 string,

column3 LONG )

STORED BY 'carbondata'

TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',

'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')

  was:
Allow user to pass local dictionary configuration in Create table statement.

LOCAL_DICTIONARY_ENABLE

CARBON_LOCALDICT_THRESHOLD

LOCAL_DICTIONARY_INCLUDE

LOCAL_DICTIONARY_EXCLUDE

CREATE TABLE carbontable(

column1 string,

column2 string,

column3 LONG )

STORED BY 'carbondata'

TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',

'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')


> Support Adding Local Dictionary configuration in Create table statement
> ---
>
> Key: CARBONDATA-2585
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2585
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Allow user to pass local dictionary configuration in Create table statement.
> LOCAL_DICTIONARY_ENABLE: enable or disable local dictionary generation for 
> a table (by default, local dictionary generation is enabled)
> CARBON_LOCALDICT_THRESHOLD: the threshold value for local dictionary 
> generation (default is 1000)
> LOCAL_DICTIONARY_INCLUDE: list of columns for which the user wants to generate 
> a local dictionary (by default, all no-dictionary string columns are considered 
> for generation)
> LOCAL_DICTIONARY_EXCLUDE: list of columns for which the user does not want to 
> generate a local dictionary (by default, no columns are excluded unless 
> configured)
> CREATE TABLE carbontable(
> column1 string,
> column2 string,
> column3 LONG )
> STORED BY 'carbondata'
> TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',
> 'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (CARBONDATA-2586) Support Showing local dictionary configuration in desc formatted command

2018-06-06 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-2586:
---

Assignee: Akash R Nilugal

> Support Showing local dictionary configuration in desc formatted command
> 
>
> Key: CARBONDATA-2586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2586
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>
> Support Showing local dictionary parameter in Desc formatted command
>  # CARBON_LOCALDICT_THRESHOLD
>  # ENABLE_LOCAL_DICT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (CARBONDATA-2585) Support Adding Local Dictionary configuration in Create table statement

2018-06-06 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-2585:
---

Assignee: Akash R Nilugal

> Support Adding Local Dictionary configuration in Create table statement
> ---
>
> Key: CARBONDATA-2585
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2585
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>
> Allow user to pass local dictionary configuration in Create table statement.
> ENABLE_LOCAL_DICT
> CARBON_LOCALDICT_THRESHOLD
> CREATE TABLE carbontable(
> column1 string,
> column2 string,
> column3 LONG )
> STORED BY 'carbondata'
> TBLPROPERTIES('ENABLE_LOCAL_DICT'='true','CARBON_LOCALDICT_THRESHOLD'='1000')



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CARBONDATA-2586) Support Showing local dictionary configuration in desc formatted command

2018-06-20 Thread Akash R Nilugal (JIRA)


[ 
https://issues.apache.org/jira/browse/CARBONDATA-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517902#comment-16517902
 ] 

Akash R Nilugal commented on CARBONDATA-2586:
-

 [GitHub Pull Request #2375|https://github.com/apache/carbondata/pull/2375]

> Support Showing local dictionary configuration in desc formatted command
> 
>
> Key: CARBONDATA-2586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2586
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>
> Support Showing local dictionary parameter in Desc formatted command
>  # LOCAL_DICTIONARY_ENABLE
>  # CARBON_LOCALDICT_THRESHOLD
>  # LOCAL_DICTIONARY_INCLUDE
>  # LOCAL_DICTIONARY_EXCLUDE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2586) Support Showing local dictionary configuration in desc formatted command

2018-06-21 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2586:

Description: 
Support Showing local dictionary parameter in Desc formatted command
 # LOCAL_DICTIONARY_ENABLE
 # LOCAL_DICTIONARY_THRESHOLD
 # LOCAL_DICTIONARY_INCLUDE
 # LOCAL_DICTIONARY_EXCLUDE

  was:
Support Showing local dictionary parameter in Desc formatted command
 # LOCAL_DICTIONARY_ENABLE
 # CARBON_LOCALDICT_THRESHOLD
 # LOCAL_DICTIONARY_INCLUDE
 # LOCAL_DICTIONARY_EXCLUDE


> Support Showing local dictionary configuration in desc formatted command
> 
>
> Key: CARBONDATA-2586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2586
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>
> Support Showing local dictionary parameter in Desc formatted command
>  # LOCAL_DICTIONARY_ENABLE
>  # LOCAL_DICTIONARY_THRESHOLD
>  # LOCAL_DICTIONARY_INCLUDE
>  # LOCAL_DICTIONARY_EXCLUDE
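
As an illustrative sketch of this sub-task (not from the issue text): a table created with the four properties above would expose them through the describe command. The exact rows that appear in the output depend on the implementation.

```sql
-- Sketch only: create a table with local dictionary properties,
-- then describe it; the desc formatted output is expected to list
-- the four LOCAL_DICTIONARY_* parameters above.
CREATE TABLE carbontable(
  column1 string,
  column2 string,
  column3 LONG)
STORED BY 'carbondata'
TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true',
  'LOCAL_DICTIONARY_THRESHOLD'='1000',
  'LOCAL_DICTIONARY_INCLUDE'='column1',
  'LOCAL_DICTIONARY_EXCLUDE'='column2');

DESCRIBE FORMATTED carbontable;
```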



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2585) Support Adding Local Dictionary configuration in Create table statement

2018-06-21 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2585:

Description: 
Allow user to pass local dictionary configuration in Create table statement.

LOCAL_DICTIONARY_ENABLE: enable or disable local dictionary generation for 
a table (by default, local dictionary generation is enabled)

LOCAL_DICTIONARY_THRESHOLD: the threshold value for local dictionary 
generation (default is 1000)

LOCAL_DICTIONARY_INCLUDE: list of columns for which the user wants to generate 
a local dictionary (by default, all no-dictionary string columns are considered 
for generation)

LOCAL_DICTIONARY_EXCLUDE: list of columns for which the user does not want to 
generate a local dictionary (by default, no columns are excluded unless 
configured)

CREATE TABLE carbontable(

column1 string,

column2 string,

column3 LONG )

STORED BY 'carbondata'

TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',

'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')

  was:
Allow user to pass local dictionary configuration in Create table statement.

LOCAL_DICTIONARY_ENABLE: enable or disable local dictionary generation for a 
table (by default, local dictionary generation is enabled)

CARBON_LOCALDICT_THRESHOLD: the threshold value for local dictionary 
generation (default is 1000)

LOCAL_DICTIONARY_INCLUDE: list of columns for which the user wants to generate 
a local dictionary (by default, all no-dictionary string columns are considered 
for generation)

LOCAL_DICTIONARY_EXCLUDE: list of columns for which the user does not want to 
generate a local dictionary (by default, no columns are excluded unless 
configured)

CREATE TABLE carbontable(

column1 string,

column2 string,

column3 LONG )

STORED BY 'carbondata'

TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',

'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')


> Support Adding Local Dictionary configuration in Create table statement
> ---
>
> Key: CARBONDATA-2585
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2585
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: kumar vishal
>Assignee: Akash R Nilugal
>Priority: Major
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Allow user to pass local dictionary configuration in Create table statement.
> LOCAL_DICTIONARY_ENABLE: enable or disable local dictionary generation 
> for a table (by default, local dictionary generation is enabled)
> LOCAL_DICTIONARY_THRESHOLD: the threshold value for local dictionary 
> generation (default is 1000)
> LOCAL_DICTIONARY_INCLUDE: list of columns for which the user wants to generate 
> a local dictionary (by default, all no-dictionary string columns are considered 
> for generation)
> LOCAL_DICTIONARY_EXCLUDE: list of columns for which the user does not want to 
> generate a local dictionary (by default, no columns are excluded unless 
> configured)
> CREATE TABLE carbontable(
> column1 string,
> column2 string,
> column3 LONG )
> STORED BY 'carbondata'
> TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',
> 'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2431) Incremental data added after table creation is not reflecting while doing select query.

2018-05-03 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2431:
---

 Summary: Incremental data added after table creation is not 
reflecting while doing select query.
 Key: CARBONDATA-2431
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2431
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Rahul Kumar


steps to reproduce :

 

1: Write a carbon data file containing 10 records using the SDK and upload it to 
HDFS. Create an external table with this location.
 2: Execute the select query and observe that 10 records are returned.
 3: Again write a new file with 10 records and move it to HDFS 
in the same folder.
 4: Now execute the select query again and observe that only 10 records are 
returned instead of 20.
 5: Create a new external table on the same location and observe that all 20 
records are returned.
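
The query side of these repro steps can be sketched as follows (the table name and HDFS path are illustrative, not from the issue; the SDK file-writing step is omitted):

```sql
-- Sketch of steps 1-2 and 4-5: an external table pointed at the SDK
-- output folder, queried before and after new files are added.
CREATE EXTERNAL TABLE sdk_output STORED BY 'carbondata'
  LOCATION 'hdfs://hacluster/sdk_files';

SELECT COUNT(*) FROM sdk_output;  -- 10 after the first file

-- after copying a second 10-record file into the same folder:
SELECT COUNT(*) FROM sdk_output;  -- bug: still 10 instead of 20
```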



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2419) sortColumns Order we are getting wrong as we set for external table is fixed

2018-04-30 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2419:
---

 Summary: sortColumns Order we are getting wrong as we set for 
external table is fixed
 Key: CARBONDATA-2419
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2419
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-1626) add datasize and index size to table status file

2017-10-26 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1626:
---

 Summary: add datasize and index size to table status file
 Key: CARBONDATA-1626
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1626
 Project: CarbonData
  Issue Type: Improvement
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


If CarbonData is used in a cloud environment that charges or bills for the 
queries run, adding data size and index size to the table status file will help 
support billing features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-1605) Support Column comment for carbon table

2017-10-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1605:
---

 Summary: Support Column comment for carbon table
 Key: CARBONDATA-1605
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1605
 Project: CarbonData
  Issue Type: New Feature
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


Support column comments for a table, so that when the table is described, we can 
show the comment for each specific column. If no comment is given, the comment 
defaults to null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-1604) Support Column comment for carbon table

2017-10-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1604:
---

 Summary: Support Column comment for carbon table
 Key: CARBONDATA-1604
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1604
 Project: CarbonData
  Issue Type: New Feature
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-1607) Support Column comment for carbon table

2017-10-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1607:
---

 Summary: Support Column comment for carbon table
 Key: CARBONDATA-1607
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1607
 Project: CarbonData
  Issue Type: New Feature
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


Support column comments for a table, so that when the table is described, we can 
show the comment for each specific column. If no comment is given, the comment 
defaults to null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-1606) Support Column comment for carbon table

2017-10-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1606:
---

 Summary: Support Column comment for carbon table
 Key: CARBONDATA-1606
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1606
 Project: CarbonData
  Issue Type: New Feature
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


Support column comments for a table, so that when the table is described, we can 
show the comment for each specific column. If no comment is given, the comment 
defaults to null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-1608) Support Column comment for carbon table

2017-10-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1608:
---

 Summary: Support Column comment for carbon table
 Key: CARBONDATA-1608
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1608
 Project: CarbonData
  Issue Type: New Feature
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


Support column comments for a table, so that when the table is described, we can 
show the comment for each specific column. If no comment is given, the comment 
defaults to null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1758) Carbon1.3.0- No Inverted Index : Select column with is null for no_inverted_index column throws java.lang.ArrayIndexOutOfBoundsException

2018-01-10 Thread Akash R Nilugal (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321705#comment-16321705
 ] 

Akash R Nilugal commented on CARBONDATA-1758:
-

I have also executed the queries; they are working fine.

> Carbon1.3.0- No Inverted Index : Select column with is null for 
> no_inverted_index column throws java.lang.ArrayIndexOutOfBoundsException
> 
>
> Key: CARBONDATA-1758
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1758
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.3.0
> Environment: 3 node cluster
>Reporter: Chetan Bhat
>  Labels: Functional
>
> Steps :
> In Beeline user executes the queries in sequence.
> CREATE TABLE uniqdata_DI_int (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='cust_id','NO_INVERTED_INDEX'='cust_id');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/3000_UniqData.csv' into table 
> uniqdata_DI_int OPTIONS('DELIMITER'=',', 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> Select count(CUST_ID) from uniqdata_DI_int;
> Select count(CUST_ID)*10 as multiple from uniqdata_DI_int;
> Select avg(CUST_ID) as average from uniqdata_DI_int;
> Select floor(CUST_ID) as average from uniqdata_DI_int;
> Select ceil(CUST_ID) as average from uniqdata_DI_int;
> Select ceiling(CUST_ID) as average from uniqdata_DI_int;
> Select CUST_ID*integer_column1 as multiple from uniqdata_DI_int;
> Select CUST_ID from uniqdata_DI_int where CUST_ID is null;
> *Issue : Select column with is null for no_inverted_index column throws 
> java.lang.ArrayIndexOutOfBoundsException*
> 0: jdbc:hive2://10.18.98.34:23040> Select CUST_ID from uniqdata_DI_int where 
> CUST_ID is null;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task 0 in stage 79.0 failed 4 times, most recent failure: Lost task 0.3 in 
> stage 79.0 (TID 123, BLR114278, executor 18): 
> org.apache.spark.util.TaskCompletionListenerException: 
> java.util.concurrent.ExecutionException: 
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
> at org.apache.spark.scheduler.Task.run(Task.scala:112)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace: (state=,code=0)
> Expected : Select column with is null for no_inverted_index column should be 
> successful displaying the correct result set.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-11 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2021:
---

 Summary: when delete is success and update is failed while writing 
status file  then a stale carbon data file is created.
 Key: CARBONDATA-2021
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


When delete succeeds but the update fails while writing the status file, a 
stale carbon data file is created,
so that file is removed on clean up and is also not considered during 
query.


When the update operation is running and the user stops it abruptly,
the carbon data file remains in the store,

so extra data appears.

During the next update, clean up of these files needs to be handled,
and the new data file should also be excluded from queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-2014) update table status for load failure only after first entry

2018-01-10 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-2014:
---

Assignee: Akash R Nilugal

> update table status for load failure only after first entry
> ---
>
> Key: CARBONDATA-2014
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2014
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> update table status for load failure only after first entry and before 
> calling to update the table status for failure, check whether it is hive 
> partition table in the same way as it is checked while updating in progress 
> status to table status



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1735) Carbon1.3.0 Load: Segment created during load is not marked for delete if beeline session is closed while load is still in progress

2018-01-09 Thread Akash R Nilugal (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16318170#comment-16318170
 ] 

Akash R Nilugal commented on CARBONDATA-1735:
-

You can verify again and mark it as resolved.

> Carbon1.3.0 Load: Segment created during load is not marked for delete if 
> beeline session is closed  while load is still in progress
> 
>
> Key: CARBONDATA-1735
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1735
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster 
>Reporter: Ajeet Rai
>Priority: Minor
>  Labels: DFX
>
> Load: Segment created during load is not marked for delete if beeline session 
> is closed  while load is still in progress.
> Steps: 
> 1: Create a table with dictionary include
> 2: Start a load job
> 3: close the beeline session when global dictionary generation job is still 
> in progress.
> 4: Observe that global dictionary generation job is completed but next job is 
> not triggered.
> 5:  Also observe that table status file is not updated and status of job is 
> still in progress.
> 6: show segment  will show this segment with status as in progress.
> Expected behaviour: Either job should be completed or load should fail and 
> segment should be marked for delete.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1735) Carbon1.3.0 Load: Segment created during load is not marked for delete if beeline session is closed while load is still in progress

2018-01-09 Thread Akash R Nilugal (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16318168#comment-16318168
 ] 

Akash R Nilugal commented on CARBONDATA-1735:
-

This issue is fixed with CARBONDATA-1789 and CARBONDATA-1791; the fix is linked 
to PR [https://github.com/apache/carbondata/pull/1610]

> Carbon1.3.0 Load: Segment created during load is not marked for delete if 
> beeline session is closed  while load is still in progress
> 
>
> Key: CARBONDATA-1735
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1735
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster 
>Reporter: Ajeet Rai
>Priority: Minor
>  Labels: DFX
>
> Load: Segment created during load is not marked for delete if beeline session 
> is closed  while load is still in progress.
> Steps: 
> 1: Create a table with dictionary include
> 2: Start a load job
> 3: close the beeline session when global dictionary generation job is still 
> in progress.
> 4: Observe that global dictionary generation job is completed but next job is 
> not triggered.
> 5:  Also observe that table status file is not updated and status of job is 
> still in progress.
> 6: show segment  will show this segment with status as in progress.
> Expected behaviour: Either job should be completed or load should fail and 
> segment should be marked for delete.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-18 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When delete succeeds but the update fails while writing the status file, a 
stale carbon data file is created,
 so that file is removed on clean up and is also not considered during 
query.

When the update operation is running and the user stops it abruptly,
 the carbon data file remains in the store,

so extra data appears.

During the next update, clean up of these files needs to be handled,
 and the new data file should also be excluded from queries.

 

  CREATE TABLE uniqdata_carbon1 (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)stored by 'carbondata';

LOAD DATA INPATH 'hdfs://hacluster/chetan/split3.csv' into table 
uniqdata_carbon1 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

create table uniqdata_carbon stored by 'carbondata' location 
'/opt/external_location' as select * from uniqdata_carbon1;
  create table uniqdata_carbon stored by 'carbondata' 
tblproperties('sort_columns'='CUST_ID') as select * from uniqdata_carbon1;

 

199221,CUST_NAME_190221,ACTIVE_EMUI_VERSION_190221,2010-10-04 
02:57:17,2012-10-04 
03:56:07,12337200,-2.2337200E+11,12345705900,22345705900,11234567490,-11234567490,27000

 

  was:
When delete succeeds but the update fails while writing the status file, a 
stale carbon data file is created,
 so that file is removed on clean up and is also not considered during 
query.

When the update operation is running and the user stops it abruptly,
 the carbon data file remains in the store,

so extra data appears.

During the next update, clean up of these files needs to be handled,
 and the new data file should also be excluded from queries.

 

  CREATE TABLE uniqdata_carbon1 (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)stored by 'carbondata';

LOAD DATA INPATH 'hdfs://hacluster/chetan/split3.csv' into table 
uniqdata_carbon1 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

create table uniqdata_carbon stored by 'carbondata' location 
'/opt/external_location' as select * from uniqdata_carbon1;
 create table uniqdata_carbon stored by 'carbondata' 
tblproperties('sort_columns'='CUST_ID') as select * from uniqdata_carbon1;

 


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created; that file should be removed during clean-up
> and ignored during query.
> When the update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update should clean up such files, and queries should also exclude
> the new data file.
>  
>   CREATE TABLE uniqdata_carbon1 (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int)stored by 'carbondata';
> LOAD DATA INPATH 'hdfs://hacluster/chetan/split3.csv' into table 
> uniqdata_carbon1 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> create table uniqdata_carbon stored by 'carbondata' location 
> 

[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-18 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.
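The clean-up idea above can be sketched as follows. This is an illustrative model only, not CarbonData's actual API (all class and method names here are hypothetical): a data file that exists in the store but is not referenced by the status file was left behind by a failed or aborted update, so clean-up removes it and queries ignore it.

```java
import java.util.*;

// Illustrative sketch: any data file on disk that the status file does not
// reference was never committed (a failed or aborted update left it behind),
// so it is stale. Clean-up deletes these files; queries read only files that
// the status file lists. All names are hypothetical.
public class StaleFileCleanup {

    /** Returns the files present in the store but absent from the status file. */
    static List<String> findStaleFiles(Set<String> filesInStore, Set<String> filesInStatus) {
        List<String> stale = new ArrayList<>();
        for (String f : filesInStore) {
            if (!filesInStatus.contains(f)) {
                stale.add(f); // written by a failed/aborted update; never committed
            }
        }
        Collections.sort(stale);
        return stale;
    }

    public static void main(String[] args) {
        // part-1 was committed; part-2 was left behind by an aborted update.
        Set<String> store = new HashSet<>(Arrays.asList("part-1.carbondata", "part-2.carbondata"));
        Set<String> status = Collections.singleton("part-1.carbondata");
        System.out.println(findStaleFiles(store, status)); // [part-2.carbondata]
    }
}
```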

 

  CREATE TABLE uniqdata_carbon1 (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)stored by 'carbondata';

LOAD DATA INPATH 'hdfs://hacluster/chetan/split3.csv' into table 
uniqdata_carbon1 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

create table uniqdata_carbon stored by 'carbondata' location 
'/opt/external_location' as select * from uniqdata_carbon1;
 create table uniqdata_carbon stored by 'carbondata' 
tblproperties('sort_columns'='CUST_ID') as select * from uniqdata_carbon1;

 

  was:
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

 

 


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created; that file should be removed during clean-up
> and ignored during query.
> When the update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update should clean up such files, and queries should also exclude
> the new data file.
>  
>   CREATE TABLE uniqdata_carbon1 (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int)stored by 'carbondata';
> LOAD DATA INPATH 'hdfs://hacluster/chetan/split3.csv' into table 
> uniqdata_carbon1 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> create table uniqdata_carbon stored by 'carbondata' location 
> '/opt/external_location' as select * from uniqdata_carbon1;
>  create table uniqdata_carbon stored by 'carbondata' 
> tblproperties('sort_columns'='CUST_ID') as select * from uniqdata_carbon1;
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-18 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

  was:
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

  CREATE TABLE uniqdata_carbon1 (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)stored by 'carbondata';

LOAD DATA INPATH 'hdfs://hacluster/chetan/split3.csv' into table 
uniqdata_carbon1 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

create table uniqdata_carbon stored by 'carbondata' location 
'/opt/external_location' as select * from uniqdata_carbon1;
  create table uniqdata_carbon stored by 'carbondata' 
tblproperties('sort_columns'='CUST_ID') as select * from uniqdata_carbon1;

 

199221,CUST_NAME_190221,ACTIVE_EMUI_VERSION_190221,2010-10-04 
02:57:17,2012-10-04 
03:56:07,12337200,-2.2337200E+11,12345705900,22345705900,11234567490,-11234567490,27000

 


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created; that file should be removed during clean-up
> and ignored during query.
> When the update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update should clean up such files, and queries should also exclude
> the new data file.
>  





[jira] [Created] (CARBONDATA-2031) Select column with is null for no_inverted_index column throws java.lang.ArrayIndexOutOfBoundsException

2018-01-15 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2031:
---

 Summary: Select column with is null for no_inverted_index column 
throws java.lang.ArrayIndexOutOfBoundsException
 Key: CARBONDATA-2031
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2031
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
 Attachments: dest.csv

steps:

1) create table zerorows_part (c1 string,c2 int,c3 string,c5 string) STORED BY 'carbondata' TBLPROPERTIES('DICTIONARY_INCLUDE'='C2','NO_INVERTED_INDEX'='C2')

2) LOAD DATA LOCAL INPATH '$filepath/dest.csv' INTO table zerorows_part OPTIONS('delimiter'=',','fileheader'='c1,c2,c3,c5')

3) select c2 from zerorows_part where c2 is null

 

Previous exception in task: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 0
    org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.updateScanner(AbstractDataBlockIterator.java:136)
    org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.processNextBatch(DataBlockIteratorImpl.java:64)
    org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:46)
    org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:283)
    org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:171)
    org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:370)
    org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
    org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
    org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
    org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
    org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    org.apache.spark.scheduler.Task.run(Task.scala:108)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:118)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
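An ArrayIndexOutOfBoundsException: 0 in the scanner suggests code indexing into an empty array, likely an inverted-index row mapping that does not exist for a NO_INVERTED_INDEX column. The following is a generic illustration of the defensive pattern, with hypothetical names; it is not the actual CarbonData scanner code.

```java
// Illustrative model: for an inverted-index column, rowMapping translates a
// stored position back to the original row id. For a NO_INVERTED_INDEX column
// the mapping is empty and positions are already row ids, so indexing into the
// empty mapping (rowMapping[0]) would throw ArrayIndexOutOfBoundsException: 0.
public class NullFilterScan {

    static int resolveRowId(int[] rowMapping, int position) {
        if (rowMapping.length == 0) {
            return position;         // no inverted index: identity mapping
        }
        return rowMapping[position]; // inverted index present: translate position
    }

    public static void main(String[] args) {
        System.out.println(resolveRowId(new int[0], 0));         // 0 (would have thrown without the guard)
        System.out.println(resolveRowId(new int[]{2, 0, 1}, 0)); // 2
    }
}
```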

 

 

[^dest.csv]





[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

  was:
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

test("overwrite whole partition table with empty data") {
  sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
  sql("insert into partitionLoadTable select 'abc',4,'def'")
  sql("insert into partitionLoadTable select 'abd',5,'xyz'")
  sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
  sql("insert overwrite table partitionLoadTable select * from noLoadTable")
  checkAnswer(sql("select * from partitionLoadTable"), sql("select * from noLoadTable"))
}


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created; that file should be removed during clean-up
> and ignored during query.
> When the update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update should clean up such files, and queries should also exclude
> the new data file.
>  





[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

test("overwrite whole partition table with empty data") {
  sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
  sql("insert into partitionLoadTable select 'abc',4,'def'")
  sql("insert into partitionLoadTable select 'abd',5,'xyz'")
  sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
  sql("insert overwrite table partitionLoadTable select * from noLoadTable")
  checkAnswer(sql("select * from partitionLoadTable"), sql("select * from noLoadTable"))
}

  was:
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created; that file should be removed during clean-up
> and ignored during query.
> When the update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update should clean up such files, and queries should also exclude
> the new data file.
>  
> test("overwrite whole partition table with empty data") {
>  sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
>  sql("insert into partitionLoadTable select 'abc',4,'def'")
>  sql("insert into partitionLoadTable select 'abd',5,'xyz'")
>  sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
>  sql("insert overwrite table partitionLoadTable select * from noLoadTable")
>  checkAnswer(sql("select * from partitionLoadTable"), sql("select * from noLoadTable"))
> }





[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

  was:
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

CREATE TABLE uniqdata_string(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ 
timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 
decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, 
Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION 
string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
('TABLE_BLOCKSIZE'= '256 MB');

 

 LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
uniqdata_string partition(active_emui_version='abc') 
OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');

 

CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

 

insert overwrite table uniqdata_string partition(active_emui_version='xxx') 
select CUST_ID, CUST_NAME,DOB,doj, bigint_column1, bigint_column2, 
decimal_column1, decimal_column2,double_column1, double_column2,integer_column1 
from uniqdata_hive limit 10;

 9000,CUST_NAME_0,ACTIVE_EMUI_VERSION_0,1970-01-01 01:00:03,1970-01-01 
02:00:03,123372036854,-223372036854,12345678901.123400,22345678901.123400,11234567489.797600,-11234567489.797600,1


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created; that file should be removed during clean-up
> and ignored during query.
> When the update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update should clean up such files, and queries should also exclude
> the new data file.





[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 

CREATE TABLE uniqdata_string(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ 
timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 
decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, 
Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION 
string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
('TABLE_BLOCKSIZE'= '256 MB');

 

 LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
uniqdata_string partition(active_emui_version='abc') 
OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');

 

CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

 

insert overwrite table uniqdata_string partition(active_emui_version='xxx') 
select CUST_ID, CUST_NAME,DOB,doj, bigint_column1, bigint_column2, 
decimal_column1, decimal_column2,double_column1, double_column2,integer_column1 
from uniqdata_hive limit 10;

 9000,CUST_NAME_0,ACTIVE_EMUI_VERSION_0,1970-01-01 01:00:03,1970-01-01 
02:00:03,123372036854,-223372036854,12345678901.123400,22345678901.123400,11234567489.797600,-11234567489.797600,1

  was:
When the delete succeeds but the update fails while writing the status file, a stale carbon data file is created; that file should be removed during clean-up and ignored during query.

When the update operation is running and the user aborts it abruptly, the carbon data file remains in the store, so extra data appears in query results.

The next update should clean up such files, and queries should also exclude the new data file.

 


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created; that file should be removed during clean-up
> and ignored during query.
> When the update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update should clean up such files, and queries should also exclude
> the new data file.
>  
> CREATE TABLE uniqdata_string(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ 
> timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 
> decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, 
> Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION 
> string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ('TABLE_BLOCKSIZE'= '256 MB');
>  
>  LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_string partition(active_emui_version='abc') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
>  
> CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
>  
> insert overwrite table uniqdata_string 

[jira] [Created] (CARBONDATA-2060) Fix InsertOverwrite on partition table

2018-01-19 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2060:
---

 Summary: Fix InsertOverwrite on partition table
 Key: CARBONDATA-2060
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2060
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


When a partition table is overwritten from an empty table, the overwrite does not take effect, and when insert overwrite is done on a dynamic partition table, the overwrite does not happen either.

sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
sql("insert into partitionLoadTable select 'abc',4,'def'")
sql("insert into partitionLoadTable select 'abd',5,'xyz'")
sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
sql("insert overwrite table partitionLoadTable select * from noLoadTable")

When we do select * after the insert overwrite operation, it should ideally return no rows, but it returns all the old data.

sql("CREATE TABLE uniqdata_hive_static (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ','")
sql("CREATE TABLE uniqdata_string_static(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB')")
sql(s"LOAD DATA INPATH '$resourcesPath/partData.csv' into table uniqdata_string_static OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE')")
sql(s"LOAD DATA INPATH '$resourcesPath/partData.csv' into table uniqdata_string_static OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE')")

sql("insert overwrite table uniqdata_string_static select CUST_ID, CUST_NAME,DOB,doj, bigint_column1, bigint_column2, decimal_column1, decimal_column2,double_column1, double_column2,integer_column1,active_emui_version from uniqdata_hive_static limit 10")

After this, select * still returns rows; ideally it should give an empty result.
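The expected INSERT OVERWRITE semantics can be shown with a toy in-memory model (this is a sketch of the semantics only, not CarbonData code; the class and method names are hypothetical): the target's existing rows are replaced by the source rows, even when the source is empty.

```java
import java.util.*;

// Toy model of INSERT OVERWRITE semantics: the existing rows of the target
// are discarded and replaced by the source rows, even when the source is
// empty. The bug described above is the old rows surviving an overwrite
// from an empty source.
public class InsertOverwriteModel {

    static List<String> insertOverwrite(List<String> target, List<String> source) {
        target.clear();          // overwrite discards all existing rows...
        target.addAll(source);   // ...and replaces them with the source rows
        return target;
    }

    public static void main(String[] args) {
        List<String> table = new ArrayList<>(Arrays.asList("abc,4,def", "abd,5,xyz"));
        insertOverwrite(table, Collections.emptyList());
        System.out.println(table.size()); // 0: overwrite from an empty source empties the table
    }
}
```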

 





[jira] [Created] (CARBONDATA-2070) when hive metastore is enabled, create preaggregate table on decimal column of main table is failing

2018-01-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2070:
---

 Summary: when hive metastore is enabled, create preaggregate table 
on decimal column of main table is failing
 Key: CARBONDATA-2070
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2070
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


{color:#33}steps:{color}

{color:#33}Enable hive metastore and run the following queries{color}

{color:#33}1){color}

CREATE TABLE uniqdata(CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string,DOB timestamp,DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10),DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format'

2)

insert into uniqdata select 
9000,'CUST_NAME_0','ACTIVE_EMUI_VERSION_0','1970-01-01 
01:00:03','1970-01-01 
02:00:03',123372036854,-223372036854,12345678901.123400,22345678901.123400,11234567489.797600,-11234567489.797600,1

3)

create datamap uniqdata_agg on table uniqdata using 
'preaggregate' as select min(DECIMAL_COLUMN1) from uniqdata group by 
DECIMAL_COLUMN1

 

java.lang.ClassCastException: 
org.apache.carbondata.core.metadata.datatype.DataType cannot be cast to 
org.apache.carbondata.core.metadata.datatype.DecimalType
    at 
org.apache.carbondata.core.metadata.schema.table.column.ColumnSchema.write(ColumnSchema.java:478)
    at 
org.apache.carbondata.core.metadata.schema.table.TableSchema.write(TableSchema.java:215)
    at 
org.apache.carbondata.core.metadata.schema.table.DataMapSchema.write(DataMapSchema.java:99)
    at 
org.apache.carbondata.core.metadata.schema.table.TableInfo.write(TableInfo.java:245)
    at 
org.apache.carbondata.core.metadata.schema.table.TableInfo.serialize(TableInfo.java:304)
    at 
org.apache.spark.sql.CarbonDatasourceHadoopRelation.buildScan(CarbonDatasourceHadoopRelation.scala:83)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy$$anonfun$1.apply(CarbonLateDecodeStrategy.scala:63)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy$$anonfun$1.apply(CarbonLateDecodeStrategy.scala:63)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy$$anonfun$pruneFilterProject$1.apply(CarbonLateDecodeStrategy.scala:178)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy$$anonfun$pruneFilterProject$1.apply(CarbonLateDecodeStrategy.scala:177)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy.getDataSourceScan(CarbonLateDecodeStrategy.scala:366)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy.pruneFilterProjectRaw(CarbonLateDecodeStrategy.scala:299)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy.pruneFilterProject(CarbonLateDecodeStrategy.scala:172)
    at 
org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy.apply(CarbonLateDecodeStrategy.scala:59)
    at 
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
    at 
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
    at 
org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
    at 
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
    at 
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
    at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
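The trace points at an unchecked downcast during schema serialization. A minimal sketch of the safe pattern — the types here are simplified stand-ins, not the real ColumnSchema API:

```scala
// Simplified stand-ins for the real metadata types.
sealed trait DataType { def getName: String }
case object IntType extends DataType { val getName: String = "int" }
final case class DecimalType(precision: Int, scale: Int) extends DataType {
  val getName: String = "decimal"
}

// Write precision/scale only when the type really is a DecimalType,
// instead of casting the generic DataType unconditionally.
def writeDataType(dt: DataType): String = dt match {
  case DecimalType(p, s) => s"decimal($p,$s)"
  case other             => other.getName
}
```

A pattern match (or an `isInstanceOf` check) makes the decimal-only fields reachable only on the decimal branch, which is what the failing cast in `ColumnSchema.write` skips.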





[jira] [Commented] (CARBONDATA-1727) Dataload is successful even if the table is dropped from another client.

2018-01-16 Thread Akash R Nilugal (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328287#comment-16328287
 ] 

Akash R Nilugal commented on CARBONDATA-1727:
-

This scenario no longer happens, as drop is not allowed during load.
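The fix relies on table-level locking. As a rough sketch using plain JDK locks (the names and structure here are illustrative, not CarbonData's actual lock API), loads share a read lock, so a drop that needs the exclusive write lock is refused while any load is in progress:

```scala
import java.util.concurrent.locks.ReentrantReadWriteLock

// Illustrative only: loads share the read lock; drop needs the exclusive
// write lock, so it is refused while any load is running.
val tableLock = new ReentrantReadWriteLock()

def tryDrop(): Boolean = {
  val acquired = tableLock.writeLock().tryLock() // fails if a load holds the lock
  if (acquired) tableLock.writeLock().unlock()
  acquired
}

def withLoad[T](body: => T): T = {
  tableLock.readLock().lock()
  try body finally tableLock.readLock().unlock()
}
```

With this shape, `tryDrop()` succeeds only when no load holds the shared lock.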

> Dataload is successful even if the table is dropped from another client.
> -
>
> Key: CARBONDATA-1727
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1727
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load, spark-integration
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Mohammad Shahid Khan
>Assignee: Mohammad Shahid Khan
>Priority: Minor
>
> Table drop has the highest priority, so even if a load operation is in 
> progress on some table, 
> the table can be dropped.
> If the table is dropped before the load operation finishes, then the load 
> should fail.
> Steps:
> 1. Create table t1
> 2. Load data into t1 (big data that takes some time)
> 3. when load in progress, then drop the table
> Actual Result : The load is successful
> Expected Result : Final Load status should be fail.





[jira] [Created] (CARBONDATA-2014) update table status for load failure only after first entry

2018-01-09 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2014:
---

 Summary: update table status for load failure only after first 
entry
 Key: CARBONDATA-2014
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2014
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Priority: Minor


Update the table status for load failure only after the first entry is made, 
and before calling the update of the table status for failure, check whether it 
is a hive partition table, in the same way as it is checked while writing the 
in-progress status to the table status file.





[jira] [Updated] (CARBONDATA-2113) Count(*) and select * are not working on old store with V2 format

2018-02-01 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2113:

Description: 
Count(*) and select * are not working on old store of V2 format in 1.3.0

 

 

  was:
Count(*) and select * are not working on old store with V2 format

 

 


> Count(*) and select * are not working on old store with V2 format
> -
>
> Key: CARBONDATA-2113
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2113
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 1.3.0
>
>
> Count(*) and select * are not working on old store of V2 format in 1.3.0
>  
>  





[jira] [Updated] (CARBONDATA-2113) Count(*) and select * are not working on old store with V2 format

2018-02-01 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2113:

Description: 
Count(*) and select * are not working on old store of V2 format in 1.3.0

 

 1) count(*) gives zero for the old store after refresh:

0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
+---------+
| Result  |
+---------+
+---------+
No rows selected (3.419 seconds)
0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
+-----------+
| count(1)  |
+-----------+
| 0         |
+-----------+



 

 

  was:
Count(*) and select * are not working on old store of V2 format in 1.3.0

 

 


> Count(*) and select * are not working on old store with V2 format
> -
>
> Key: CARBONDATA-2113
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2113
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 1.3.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Count(*) and select * are not working on old store of V2 format in 1.3.0
>  
>  1) count(*) gives zero for the old store after refresh:
> 0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
> +---------+
> | Result  |
> +---------+
> +---------+
> No rows selected (3.419 seconds)
> 0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
> +-----------+
> | count(1)  |
> +-----------+
> | 0         |
> +-----------+
>  
>  





[jira] [Created] (CARBONDATA-2113) Count(*) and select * are not working on old store with V2 format

2018-02-01 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2113:
---

 Summary: Count(*) and select * are not working on old store with 
V2 format
 Key: CARBONDATA-2113
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2113
 Project: CarbonData
  Issue Type: Bug
  Components: core
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
 Fix For: 1.3.0


Count(*) and select * are not working on old store with V2 format

 

 





[jira] [Updated] (CARBONDATA-2113) Count(*) and select * are not working on old store with V2 format

2018-02-01 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2113:

Description: 
Count(*) and select * are not working on old store of V2 format in 1.3.0

 

 1) count(*) gives zero for the old store after refresh:

0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
+---------+
| Result  |
+---------+
+---------+
No rows selected (3.419 seconds)
0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
+-----------+
| count(1)  |
+-----------+
| 0         |
+-----------+

 

CREATE TABLE IF NOT EXISTS employee_c ( eid int, name String, salary String, 
destination String, doj date, toj timestamp, abc decimal(8,2), test BIGINT ) 
stored by 'carbondata';
insert into employee_c select 1,'rev','1000','ban','1999-01-12','1999-02-10 
05:25:00',123456.25,564789;
insert into employee_c select 2,'rev','1000','ban','2012-01-01','2012-02-10 
05:25:00',123456.25,564789;
create table carbon_stream stored by 'carbondata' 
TBLPROPERTIES('DICTIONARY_INCLUDE'='eid','DICTIONARY_EXCLUDE'='name','No_Inverted_Index'='name','SORT_SCOPE'='globalsort',
 'streaming'='true') as select eid , name ,salary, destination, doj, toj, abc, 
test from employee_c;

 

  was:
Count(*) and select * are not working on old store of V2 format in 1.3.0

 

 1) count * is giving zero, for old store after refresh

0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
+---------+
| Result  |
+---------+
+---------+
No rows selected (3.419 seconds)
0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
+-----------+
| count(1)  |
+-----------+
| 0         |
+-----------+



 

 


> Count(*) and select * are not working on old store with V2 format
> -
>
> Key: CARBONDATA-2113
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2113
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 1.3.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Count(*) and select * are not working on old store of V2 format in 1.3.0
>  
>  1) count(*) gives zero for the old store after refresh:
> 0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
> +---------+
> | Result  |
> +---------+
> +---------+
> No rows selected (3.419 seconds)
> 0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
> +-----------+
> | count(1)  |
> +-----------+
> | 0         |
> +-----------+
>  
> CREATE TABLE IF NOT EXISTS employee_c ( eid int, name String, salary String, 
> destination String, doj date, toj timestamp, abc decimal(8,2), test BIGINT ) 
> stored by 'carbondata';
> insert into employee_c select 1,'rev','1000','ban','1999-01-12','1999-02-10 
> 05:25:00',123456.25,564789;
> insert into employee_c select 2,'rev','1000','ban','2012-01-01','2012-02-10 
> 05:25:00',123456.25,564789;
> create table carbon_stream stored by 'carbondata' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='eid','DICTIONARY_EXCLUDE'='name','No_Inverted_Index'='name','SORT_SCOPE'='globalsort',
>  'streaming'='true') as select eid , name ,salary, destination, doj, toj, 
> abc, test from employee_c;
>  





[jira] [Updated] (CARBONDATA-2113) Count(*) and select * are not working on old store with V2 format

2018-02-01 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2113:

Description: 
Count(*) and select * are not working on old store of V2 format in 1.3.0

 

 1) count(*) gives zero for the old store after refresh:

0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
+---------+
| Result  |
+---------+
+---------+
No rows selected (3.419 seconds)
0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
+-----------+
| count(1)  |
+-----------+
| 0         |
+-----------+

 

  was:
Count(*) and select * are not working on old store of V2 format in 1.3.0

 

 1) count * is giving zero, for old store after refresh

0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
+---------+
| Result  |
+---------+
+---------+
No rows selected (3.419 seconds)
0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
+-----------+
| count(1)  |
+-----------+
| 0         |
+-----------+

 

CREATE TABLE IF NOT EXISTS employee_c ( eid int, name String, salary String, 
destination String, doj date, toj timestamp, abc decimal(8,2), test BIGINT ) 
stored by 'carbondata';
insert into employee_c select 1,'rev','1000','ban','1999-01-12','1999-02-10 
05:25:00',123456.25,564789;
insert into employee_c select 2,'rev','1000','ban','2012-01-01','2012-02-10 
05:25:00',123456.25,564789;
create table carbon_stream stored by 'carbondata' 
TBLPROPERTIES('DICTIONARY_INCLUDE'='eid','DICTIONARY_EXCLUDE'='name','No_Inverted_Index'='name','SORT_SCOPE'='globalsort',
 'streaming'='true') as select eid , name ,salary, destination, doj, toj, abc, 
test from employee_c;

 


> Count(*) and select * are not working on old store with V2 format
> -
>
> Key: CARBONDATA-2113
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2113
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 1.3.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Count(*) and select * are not working on old store of V2 format in 1.3.0
>  
>  1) count(*) gives zero for the old store after refresh:
> 0: jdbc:hive2://X.X.X.X:22550/default> refresh table brinjal5;
> +---------+
> | Result  |
> +---------+
> +---------+
> No rows selected (3.419 seconds)
> 0: jdbc:hive2://X.X.X.X:22550/default> select count(*) from brinjal5;
> +-----------+
> | count(1)  |
> +-----------+
> | 0         |
> +-----------+
>  





[jira] [Created] (CARBONDATA-2182) add one more param called ExtraParmas in SessionParams for session Level operations

2018-02-13 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2182:
---

 Summary: add one more param called ExtraParmas in SessionParams 
for session Level operations
 Key: CARBONDATA-2182
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2182
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


Add one more param called ExtraParmas in SessionParams for session-level 
operations.





[jira] [Updated] (CARBONDATA-2183) fix compaction when segment is delete during compaction and remove unnecessary parameters in functions

2018-02-14 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2183:

Description: when compaction is started and job is running, and parallelly 
the segment involved in the compaction is deleted using DeleteSegmentByID, then 
compaction should be aborted and failed.  (was: when compaction is started and 
job is running, and parallelly the segment involved in the compaction is 
deleted using DeleteSegmentByID, trhen compaction should be aborted and failed.)

> fix compaction when segment is delete during compaction and remove 
> unnecessary parameters in functions
> --
>
> Key: CARBONDATA-2183
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2183
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>
> When compaction is started and the job is running, and in parallel a segment 
> involved in the compaction is deleted using DeleteSegmentByID, then the 
> compaction should be aborted and marked as failed.





[jira] [Created] (CARBONDATA-2183) fix compaction when segment is delete during compaction and remove unnecessary parameters in functions

2018-02-14 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2183:
---

 Summary: fix compaction when segment is delete during compaction 
and remove unnecessary parameters in functions
 Key: CARBONDATA-2183
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2183
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


When compaction is started and the job is running, and in parallel a segment 
involved in the compaction is deleted using DeleteSegmentByID, then the 
compaction should be aborted and marked as failed.





[jira] [Assigned] (CARBONDATA-2103) Avoid 2 time lookup in ShowTables command

2018-02-14 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-2103:
---

Assignee: Akash R Nilugal

> Avoid 2 time lookup in ShowTables command 
> --
>
> Key: CARBONDATA-2103
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2103
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Babulal
>Assignee: Akash R Nilugal
>Priority: Minor
>
> Currently, in the show tables command, the lookup happens twice. 
> Improve it to use a single lookup. 
> CarbonShowTablesCommand.scala # filterDataMaps 





[jira] [Created] (CARBONDATA-2150) Unwanted updatetable status files are being generated for the delete operation where no records are deleted

2018-02-08 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2150:
---

 Summary: Unwanted updatetable status files are being generated for 
the delete operation where no records are deleted
 Key: CARBONDATA-2150
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2150
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


Unwanted updatetable status files are being generated for the delete operation 
where no records are deleted
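The intended behaviour can be sketched as a simple guard (the names and versioning model here are hypothetical): a new updatetablestatus file version is produced only when the delete actually removed rows:

```scala
// Hypothetical: bump the updatetablestatus version only for a real delete.
def nextStatusVersion(deletedRowCount: Long, currentVersion: Int): Int =
  if (deletedRowCount > 0) currentVersion + 1
  else currentVersion // no rows deleted: do not generate a new status file
```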





[jira] [Created] (CARBONDATA-1947) fix select * issue after compaction, delete and clean files operation

2017-12-28 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1947:
---

 Summary: fix select * issue after compaction, delete and clean 
files operation
 Key: CARBONDATA-1947
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1947
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal
Priority: Minor


All data is deleted from the compacted segment if a record is deleted and the 
clean files command is run.
1: create table tt2(id int,name string) stored by 'carbondata';
2: insert into tt2 select 1,'abc';
3: insert into tt2 select 2,'pqr';
4: insert into tt2 select 3,'mno';
5: insert into tt2 select 4,'ghi'
6: Alter table tt2 compact 'minor';
7: clean files for table tt2;
8: delete from tt2 where id=3;
9: clean files for table tt2;
10: select * from tt2;

The select query returns an empty result.





[jira] [Created] (CARBONDATA-1935) Fix the backward compatibility issue for tableInfo deserialization

2017-12-22 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-1935:
---

 Summary: Fix the backward compatibility issue for tableInfo 
deserialization
 Key: CARBONDATA-1935
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1935
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


In the old carbon version, the datatype in tableInfo is a string; in the new 
version it is an object with an extra field. Fix deserialization to handle both 
and resolve the backward compatibility issue.





[jira] [Created] (CARBONDATA-2805) Wrong order in custom compaction

2018-07-30 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2805:
---

 Summary: Wrong order in custom compaction
 Key: CARBONDATA-2805
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2805
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


When we have segments 0 to 6 and segments 1, 2, 3 are given for custom 
compaction, it should create 1.1 as the compacted segment, but sometimes it 
creates 3.1 as the compacted segment, which is wrong.

+-------------------+-----------+----------------------+----------------------+-----------+-------------+
| SegmentSequenceId | Status    | Load Start Time      | Load End Time        | Merged To | File Format |
+-------------------+-----------+----------------------+----------------------+-----------+-------------+
| 4                 | Success   | 2018-07-27 07:25:... | 2018-07-27 07:25:... | NA        | COLUMNAR_V3 |
| 3.1               | Success   | 2018-07-27 07:25:... | 2018-07-27 07:25:... | NA        | COLUMNAR_V3 |
| 3                 | Compacted | 2018-07-27 07:25:... | 2018-07-27 07:25:... | 3.1       | COLUMNAR_V3 |
| 2                 | Compacted | 2018-07-27 07:25:... | 2018-07-27 07:25:... | 3.1       | COLUMNAR_V3 |
| 1                 | Compacted | 2018-07-27 07:25:... | 2018-07-27 07:25:... | 3.1       | COLUMNAR_V3 |
| 0                 | Success   | 2018-07-27 07:25:... | 2018-07-27 07:25:... | NA        | COLUMNAR_V3 |
+-------------------+-----------+----------------------+----------------------+-----------+-------------+
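Assuming the naming rule implied by the report, the merged segment id should be derived from the smallest participating segment, e.g.:

```scala
// Sketch: custom compaction of segments 1, 2, 3 must produce "1.1",
// regardless of the order in which the segment ids arrive.
def compactedSegmentId(segmentIds: Seq[Int]): String = s"${segmentIds.min}.1"
```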

 





[jira] [Created] (CARBONDATA-2803) Wrong data size calculation and entry in table status file

2018-07-30 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2803:
---

 Summary: Wrong data size calculation and entry in table status file
 Key: CARBONDATA-2803
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2803
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


The indexFileMap contains the mapping of all blocklets to index files. For 
example, if one block contains 3 blocklets, then since all three blocklets 
share the same block path, the computed size is 3 times the actual size.
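A sketch of the fix, on a simplified model rather than the actual indexFileMap types: sum block sizes over distinct block paths instead of once per blocklet:

```scala
// Simplified stand-in for a blocklet entry in the index-file mapping.
final case class BlockletEntry(blockPath: String, blockSize: Long)

// Three blocklets of one block share a path; counting each path once
// avoids inflating the data size threefold.
def segmentDataSize(entries: Seq[BlockletEntry]): Long =
  entries.groupBy(_.blockPath).values.map(_.head.blockSize).sum
```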





[jira] [Updated] (CARBONDATA-2803) handle local dictionary for older tables is not proper, null pointer exception is thrown

2018-08-02 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2803:

Description: Handling of local dictionary for older tables is not proper and a 
null pointer exception is thrown: when an older table's tableproperties do not 
have the local dictionary properties, the select query fails with a 
NullPointerException.  (was: 
the indexFileMap contains all the blocklets and index file mapping. For 
example, if one  block contains 3 blocklets, Since all the three blocklets will 
have the same block path, it is gives m3 times more size than actal size)
Summary: handle local dictionary for older tables is not proper, null 
pointer exception is thrown  (was: Wrong data size calculation and entry in 
table status file)

> handle local dictionary for older tables is not proper, null pointer 
> exception is thrown
> --
>
> Key: CARBONDATA-2803
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2803
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Handling of local dictionary for older tables is not proper and a null 
> pointer exception is thrown: when an older table's tableproperties do not 
> have the local dictionary properties, the select query fails with a 
> NullPointerException.





[jira] [Updated] (CARBONDATA-2803) data size is wrong in table status & handle local dictionary for older tables is not proper, null pointer exception is thrown

2018-08-02 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2803:

Description: 
The data size was calculated wrongly when multiple blocklets are present in a 
block and multiple data files are present in a segment.

 

Handling of local dictionary for older tables is not proper and a null pointer 
exception is thrown: when an older table's tableproperties do not have the 
local dictionary properties, the select query fails with a NullPointerException.

  was:handle local dictionary for older tables is not proper, null pointer 
exption is thrown, when the older table tableproperties do not have local 
dictioanry properties, select query fails null pinter exception

Summary: data size is wrong in table status & handle local dictionary 
for older tables is not proper, null pointer exception is thrown  (was: handle 
local dictionary for older tables is not proper, null pointer exption is thrown)

> data size is wrong in table status & handle local dictionary for older tables 
> is not proper, null pointer exception is thrown
> ---
>
> Key: CARBONDATA-2803
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2803
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The data size was calculated wrongly when multiple blocklets are present in a 
> block and multiple data files are present in a segment.
>  
> Handling of local dictionary for older tables is not proper and a null 
> pointer exception is thrown: when an older table's tableproperties do not 
> have the local dictionary properties, the select query fails with a 
> NullPointerException.





[jira] [Created] (CARBONDATA-2889) Support Decoder based fall back mechanism in Local Dictionary

2018-08-27 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2889:
---

 Summary: Support Decoder based fall back mechanism in Local 
Dictionary
 Key: CARBONDATA-2889
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2889
 Project: CarbonData
  Issue Type: Sub-task
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


Currently, when fallback is initiated for a column page in case of local 
dictionary, we keep both the encoded data

and the actual data in memory; we then form the new column page without 
dictionary encoding and at last free the encoded column page.

Because of this, the offheap memory footprint increases.

 

We can reduce the offheap memory footprint using a decoder-based fallback 
mechanism.

This means there is no need to keep the actual data along with the encoded data 
in the encoded column page. We keep only the encoded data; to form a new column 
page, we get the dictionary data from the encoded column page by uncompressing 
it, recover the actual data through the local dictionary generator, put it in 
the newly created column page, compress it again, and give it to the consumer 
for writing the blocklet. 

 

The above process may slow down loading, but it reduces the memory footprint. 
So we can provide a property that decides whether to use the current fallback 
procedure or the decoder-based fallback mechanism during fallback.
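The decoder-based path can be sketched as follows, with the local dictionary modelled as a plain map (the real generator and column-page types differ): only the encoded ids are retained, and the plain page is rebuilt by decoding through the dictionary:

```scala
// Sketch: rebuild the actual column values from the retained encoded page,
// instead of keeping a second, unencoded copy of the page in memory.
def decodeFallback(encodedIds: Seq[Int], localDictionary: Map[Int, String]): Seq[String] =
  encodedIds.map(localDictionary)
```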





[jira] [Created] (CARBONDATA-2765) Handle flat folder for implicit column

2018-07-20 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2765:
---

 Summary: Handle flat folder for implicit column
 Key: CARBONDATA-2765
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2765
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


Handle flat folder for implicit column

 

For the implicit column, the blocklet id was computed wrongly because of the 
path, as the carbondata files are present directly under the table path in the 
case of flat folder.





[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-07-16 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a 
stale carbon data file is created,
 so that file is removed on clean up and is also not considered during query.

When the update operation is running and the user stops it abruptly,
 the carbon data file remains in the store,

so extra data shows up.

During the next update, the clean up of such files needs to be handled,
 and in query the new data file should also be excluded.

 

  was:
when delete is success and update is failed while writing status file then a 
stale carbon data file is created.
 so removing that file on clean up . and also not considering that one during 
query.

when the update operation is running and the user stops it abruptly,
 then the carbon data file will be remained in the store .

so extra data is coming.

during the next update the clean up of the files need to be handled.
 and in query also new data file should be excluded.

 

create table struct_bigint1(struct1 
struct) stored by 'carbondata' 
TBLPROPERTIES ('SORT_SCOPE'='GLOBAL_SORT');
2: Load the data:
load data inpath 
'hdfs://hacluster/user/adaptivedata/struct_bigint/BigintDeltaData2.txt' into 
table struct_bigint1 options 
('delimiter'='|','fileheader'='struct1','complex_delimiter_level_1'=',','GLOBAL_SORT_PARTITIONS'='1');

 

 

32767,2147483647,922337203685477
32766,2147450880,922337203685476


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 1.3.0
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a 
> stale carbon data file is created,
>  so that file is removed on clean up and is also not considered during query.
> When the update operation is running and the user stops it abruptly,
>  the carbon data file remains in the store,
> so extra data shows up.
> During the next update, the clean up of such files needs to be handled,
>  and in query the new data file should also be excluded.
>  





[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-07-16 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a 
stale carbon data file is created,
 so that file is removed on clean up and is also not considered during query.

When the update operation is running and the user stops it abruptly,
 the carbon data file remains in the store,

so extra data shows up.

During the next update, the clean up of such files needs to be handled,
 and in query the new data file should also be excluded.

 

create table struct_bigint1(struct1 
struct) stored by 'carbondata' 
TBLPROPERTIES ('SORT_SCOPE'='GLOBAL_SORT');
2: Load the data:
load data inpath 
'hdfs://hacluster/user/adaptivedata/struct_bigint/BigintDeltaData2.txt' into 
table struct_bigint1 options 
('delimiter'='|','fileheader'='struct1','complex_delimiter_level_1'=',','GLOBAL_SORT_PARTITIONS'='1');

 

 

32767,2147483647,922337203685477
32766,2147450880,922337203685476

  was:
when delete is success and update is failed while writing status file then a 
stale carbon data file is created.
 so removing that file on clean up . and also not considering that one during 
query.

when the update operation is running and the user stops it abruptly,
 then the carbon data file will be remained in the store .

so extra data is coming.

during the next update the clean up of the files need to be handled.
 and in query also new data file should be excluded.

 


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 1.3.0
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> When the delete is successful but the update fails while writing the status 
> file, a stale carbon data file is created, so that file is removed on clean 
> up and is not considered during query.
> When the update operation is running and the user stops it abruptly, the 
> carbon data file remains in the store, so extra data appears in results.
> During the next update, the clean up of such files needs to be handled, and 
> the new data file should also be excluded in queries.
>  
> create table struct_bigint1(struct1 
> struct) stored by 'carbondata' 
> TBLPROPERTIES ('SORT_SCOPE'='GLOBAL_SORT');
> 2: Load the data:
> load data inpath 
> 'hdfs://hacluster/user/adaptivedata/struct_bigint/BigintDeltaData2.txt' into 
> table struct_bigint1 options 
> ('delimiter'='|','fileheader'='struct1','complex_delimiter_level_1'=',','GLOBAL_SORT_PARTITIONS'='1');
>  
>  
> 32767,2147483647,922337203685477
> 32766,2147450880,922337203685476



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2771) Block update and delete if compaction is in progress

2018-07-23 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2771:
---

 Summary: Block update and delete if compaction is in progress
 Key: CARBONDATA-2771
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2771
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal


Block update and delete if compaction is in progress, as it may lead to a data 
mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2740) flat folder structure is not handled for implicit column and segment file is not getting deleted after load is failed

2018-07-13 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2740:
---

 Summary: flat folder structure is not handled for implicit column 
and segment file is not getting deleted after load is failed
 Key: CARBONDATA-2740
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2740
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


flat folder structure is not handled for implicit column and segment file is 
not getting deleted after load is failed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-17 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete is successful but the update fails while writing the status 
file, a stale carbon data file is created, so that file is removed on clean up 
and is not considered during query.

When the update operation is running and the user stops it abruptly, the 
carbon data file remains in the store, so extra data appears in results.

During the next update, the clean up of such files needs to be handled, and 
the new data file should also be excluded in queries.

 

 

 

  was:
when delete is success and update is failed while writing status file then a 
stale carbon data file is created.
 so removing that file on clean up . and also not considering that one during 
query.

when the update operation is running and the user stops it abruptly,
 then the carbon data file will be remained in the store .

so extra data is coming.

during the next update the clean up of the files need to be handled.
 and in query also new data file should be excluded.

 

 

 

 

CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

 

insert into uniqdata_int_string 
partition(cust_id='1',cust_name='CUST_NAME_2') select * from uniqdata_hive 
limit 10;

 


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> When the delete is successful but the update fails while writing the status 
> file, a stale carbon data file is created, so that file is removed on clean 
> up and is not considered during query.
> When the update operation is running and the user stops it abruptly, the 
> carbon data file remains in the store, so extra data appears in results.
> During the next update, the clean up of such files needs to be handled, and 
> the new data file should also be excluded in queries.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-17 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete is successful but the update fails while writing the status 
file, a stale carbon data file is created, so that file is removed on clean up 
and is not considered during query.

When the update operation is running and the user stops it abruptly, the 
carbon data file remains in the store, so extra data appears in results.

During the next update, the clean up of such files needs to be handled, and 
the new data file should also be excluded in queries.

 

 

 

 

CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

 

insert into uniqdata_int_string 
partition(cust_id='1',cust_name='CUST_NAME_2') select * from uniqdata_hive 
limit 10;

 

  was:
when delete is success and update is failed while writing status file  then a 
stale carbon data file is created.
so removing that file on clean up . and also not considering that one during 
query.


when the update operation is running and the user stops it abruptly,
then the carbon data file will be remained in the store .

so extra data is coming.

during the next update the clean up of the files need to be handled.
and in query also new data file should be excluded.


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> When the delete is successful but the update fails while writing the status 
> file, a stale carbon data file is created, so that file is removed on clean 
> up and is not considered during query.
> When the update operation is running and the user stops it abruptly, the 
> carbon data file remains in the store, so extra data appears in results.
> During the next update, the clean up of such files needs to be handled, and 
> the new data file should also be excluded in queries.
>  
>  
>  
>  
> CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
>  
> insert into uniqdata_int_string 
> partition(cust_id='1',cust_name='CUST_NAME_2') select * from 
> uniqdata_hive limit 10;
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2217) NullPointerException on drop partition when column does not exist, and clean files issue after second level of compaction

2018-02-28 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2217:
---

 Summary: NullPointerException on drop partition when column does not 
exist, and clean files issue after second level of compaction
 Key: CARBONDATA-2217
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2217
 Project: CarbonData
  Issue Type: Bug
  Components: core, spark-integration
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


1) When drop partition is fired for a column which does not exist, it throws a 
NullPointerException.

2) select * is not working when the clean files operation is fired after the 
second level of compaction.

create table comp_dt2(id int,name string) partitioned by (dt date,c4 int) 
stored by 'carbondata';
insert into comp_dt2 select 1,'A','2001-01-01',1;
insert into comp_dt2 select 2,'B','2001-01-01',1;
insert into comp_dt2 select 3,'C','2002-01-01',2;
insert into comp_dt2 select 4,'D','2002-01-01',null;

insert into comp_dt2 select 5,'E','2003-01-01',3;
insert into comp_dt2 select 6,'F','2003-01-01',3;
insert into comp_dt2 select 7,'G','2003-01-01',4;
insert into comp_dt2 select 8,'H','2004-01-01','';

insert into comp_dt2 select 9,'H','2001-01-01',1;
insert into comp_dt2 select 10,'I','2002-01-01',null;
insert into comp_dt2 select 11,'J','2003-01-01',4;
insert into comp_dt2 select 12,'K','2003-01-01',5;

 

clean files for table comp_dt2;

select * from comp_dt2

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2235) add system configuration to filter datamaps from show tables command

2018-03-07 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2235:
---

 Summary: add system configuration to filter datamaps from show 
tables command
 Key: CARBONDATA-2235
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2235
 Project: CarbonData
  Issue Type: Bug
  Components: docs
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


carbon.query.show.datamaps

This property is a system configuration. If it is set to true, show tables 
lists all tables, including datamaps (e.g. preaggregate tables); if it is set 
to false, show tables filters out the datamaps and shows only the main tables.
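As a minimal sketch, the property would be set in carbon.properties (the value shown here is an assumption for illustration, not the documented default):

```properties
# When true, SHOW TABLES also lists datamap tables (e.g. preaggregate tables);
# when false, only the main tables are shown.
carbon.query.show.datamaps=true
```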



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2347) Fix Functional issues in LuceneDatamap in load and query and make stable

2018-04-13 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2347:
---

 Summary: Fix Functional issues in LuceneDatamap in load and query 
and make stable
 Key: CARBONDATA-2347
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2347
 Project: CarbonData
  Issue Type: Bug
  Components: data-load, data-query
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2347) Fix Functional issues in LuceneDatamap in load and query and make stable

2018-04-13 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2347:

Description: 
1) The index write location for Lucene is the same for all tasks, and 
IndexWriter takes a lock file called write.lock in the write location while 
writing the index files. In carbon loading the writer tasks are launched in 
parallel and that many writers are opened; since the write.lock file is 
acquired by one writer, all other tasks fail and data loading fails.

2) On the query side, the Lucene index was read from a single path, but after 
the load fix there will be multiple index directories after a load.

There are also functional issues in drop table, drop datamap, and show datamap.
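A minimal Python stand-in for the write.lock contention described in (1) — the directory names are illustrative, not CarbonData's actual paths. Like Lucene's write.lock, an exclusive-create lock file admits only one writer per directory, so parallel writer tasks need per-task index directories:

```python
import os
import tempfile

def acquire_write_lock(index_dir):
    """Stand-in for Lucene's write.lock: exclusive-create a lock file.
    Returns True if this writer owns the directory, False otherwise."""
    os.makedirs(index_dir, exist_ok=True)
    try:
        fd = os.open(os.path.join(index_dir, "write.lock"),
                     os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

base = tempfile.mkdtemp()

# All writer tasks pointed at one shared location: only the first wins,
# which mirrors the parallel-load failure in this issue.
shared = os.path.join(base, "lucene_index")
results_shared = [acquire_write_lock(shared) for _ in range(3)]

# One index directory per writer task: every task can write.
results_per_task = [acquire_write_lock(os.path.join(base, f"lucene_index_task{i}"))
                    for i in range(3)]

print(results_shared)    # [True, False, False]
print(results_per_task)  # [True, True, True]
```

This also illustrates why, after the load-side fix, the query side must read from multiple index directories as noted in (2).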

 

> Fix Functional issues in LuceneDatamap in load and query and make stable
> 
>
> Key: CARBONDATA-2347
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2347
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load, data-query
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Major
>
> 1) The index write location for Lucene is the same for all tasks, and 
> IndexWriter takes a lock file called write.lock in the write location while 
> writing the index files. In carbon loading the writer tasks are launched in 
> parallel and that many writers are opened; since the write.lock file is 
> acquired by one writer, all other tasks fail and data loading fails.
> 2) On the query side, the Lucene index was read from a single path, but 
> after the load fix there will be multiple index directories after a load.
> There are also functional issues in drop table, drop datamap, and show datamap.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-3025) Add SQL support for cli, and enhance CLI , add more metadata to carbon file

2018-10-17 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-3025:
---

 Summary: Add SQL support for cli, and enhance CLI , add more 
metadata to carbon file
 Key: CARBONDATA-3025
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3025
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


# support SQL integration for CLI
 # enhance CLI with more info
 # add more metadata to carbon file footer for better maintainability



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-3060) Improve CLI and fix other bugs in CLI tool

2018-10-30 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-3060:
---

 Summary: Improve CLI and fix other bugs in CLI tool
 Key: CARBONDATA-3060
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3060
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


1. Improve the syntax for the CLI DDL. The command can now be given as 
`CarbonCli for table  options('-cmd summary/benchmark -a -s -v -c 
 -m')`

The options take one string, which is basically a command that the user can 
paste directly into a command prompt and run as a java command.
The user no longer needs to give -P; when the above command is run, the table 
path is taken into consideration internally in the command line arguments.

Other issues:
1. When numeric columns are included in the dictionary, the min/max values are 
wrong.
2. The timestamp column's min and max details are wrong, shown as a long value 
rather than in the actual timestamp format.
3. The help command is not working in beeline.
4. Complex-type column min/max values are wrong, sometimes junk values.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-3066) ADD documentation for new APIs in SDK

2018-10-31 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-3066:
---

 Summary: ADD documentation for new APIs in SDK
 Key: CARBONDATA-3066
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3066
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


ADD documentation for new APIs in SDK



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-3065) by default disable inverted index for all the dimension column

2018-10-31 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-3065:
---

 Summary: by default disable inverted index for all the dimension 
column
 Key: CARBONDATA-3065
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3065
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


h3. Bottlenecks with inverted index:
 # For each page the data is first sorted to generate the inverted index, so 
data loading performance is impacted.
 # The store size is larger because of storing an inverted index for each 
dimension column, which results in more IO and impacts query performance.
 # One extra lookup happens during query due to the presence of the inverted 
index, causing many cacheline misses and impacting query performance.
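As a sketch of the per-column control this default change interacts with (table and column names here are hypothetical), CarbonData's DDL lets a table skip the inverted index for chosen dimension columns via a table property:

```sql
-- Hypothetical table: skip the inverted index for the listed dimension
-- columns via the NO_INVERTED_INDEX table property.
CREATE TABLE sales_by_city (id INT, city STRING, name STRING)
STORED BY 'carbondata'
TBLPROPERTIES ('NO_INVERTED_INDEX'='city,name');
```

Making this the default would trade the faster point lookups the inverted index gives for cheaper loads and a smaller store, per the bottlenecks listed above.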



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-3025) Add SQL support for cli, and enhance CLI , add more metadata to carbon file

2018-10-20 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-3025:

Attachment: Carbon_Maintainability.odt

> Add SQL support for cli, and enhance CLI , add more metadata to carbon file
> ---
>
> Key: CARBONDATA-3025
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3025
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Major
> Attachments: Carbon_Maintainability.odt
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> # support SQL integration for CLI
>  # enhance CLI with more info
>  # add more metadata to carbon file footer for better maintainability



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-3084) data load with float datatype fails with internal error

2018-11-05 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-3084:
---

 Summary: data load with float datatype fails with internal error
 Key: CARBONDATA-3084
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3084
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal


When data load is triggered for a float datatype and the data exceeds the 
float max range, the load fails with the following error:

java.lang.RuntimeException: internal error: FLOAT
 at 
org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.fitMinMax(DefaultEncodingFactory.java:179)
 at 
org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.selectCodecByAlgorithmForIntegral(DefaultEncodingFactory.java:259)
 at 
org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.selectCodecByAlgorithmForFloating(DefaultEncodingFactory.java:337)
 at 
org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.createEncoderForMeasureOrNoDictionaryPrimitive(DefaultEncodingFactory.java:130)
 at 
org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.createEncoder(DefaultEncodingFactory.java:66)
 at 
org.apache.carbondata.processing.store.TablePage.encodeAndCompressMeasures(TablePage.java:385)
 at org.apache.carbondata.processing.store.TablePage.encode(TablePage.java:372)
 at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processDataRows(CarbonFactDataHandlerColumnar.java:285)
 at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.access$500(CarbonFactDataHandlerColumnar.java:59)
 at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:583)
 at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:560)

 

 

 

 

Steps to reproduce are

create table datatype_floa_byte(f float, b byte) using carbon;
insert into datatype_floa_byte select 123.123,127;
insert into datatype_floa_byte select "1.7976931348623157E308",-127;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2953) Dataload fails when sort column is given, and query returns null value from another session

2018-09-20 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2953:

Description: 
# when dataload is done with sort columns, it fails with the following exceptions

java.lang.ClassCastException: java.lang.Integer cannot be cast to [B
 at 
org.apache.carbondata.processing.sort.sortdata.IntermediateSortTempRowComparator.compare(IntermediateSortTempRowComparator.java:71)
 at 
org.apache.carbondata.processing.loading.sort.unsafe.holder.UnsafeInmemoryHolder.compareTo(UnsafeInmemoryHolder.java:71)
 at 
org.apache.carbondata.processing.loading.sort.unsafe.holder.UnsafeInmemoryHolder.compareTo(UnsafeInmemoryHolder.java:26)
 at java.util.PriorityQueue.siftUpComparable(PriorityQueue.java:656)
 at java.util.PriorityQueue.siftUp(PriorityQueue.java:647)
 at java.util.PriorityQueue.offer(PriorityQueue.java:344)
 at java.util.PriorityQueue.add(PriorityQueue.java:321)
 at 
org.apache.carbondata.processing.loading.sort.unsafe.merger.UnsafeSingleThreadFinalSortFilesMerger.startSorting(UnsafeSingleThreadFinalSortFilesMerger.java:129)
 at 
org.apache.carbondata.processing.loading.sort.unsafe.merger.UnsafeSingleThreadFinalSortFilesMerger.startFinalMerge(UnsafeSingleThreadFinalSortFilesMerger.java:94)
 at 
org.apache.carbondata.processing.loading.sort.impl.UnsafeParallelReadMergeSorterImpl.sort(UnsafeParallelReadMergeSorterImpl.java:110)
 at 
org.apache.carbondata.processing.loading.steps.SortProcessorStepImpl.execute(SortProcessorStepImpl.java:55)
 at 
org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.execute(DataWriterProcessorStepImpl.java:112)
 at 
org.apache.carbondata.processing.loading.DataLoadExecutor.execute(DataLoadExecutor.java:51)
 at 
org.apache.carbondata.spark.rdd.NewCarbonDataLoadRDD$$anon$1.(NewCarbonDataLoadRDD.scala:212)
 at 
org.apache.carbondata.spark.rdd.NewCarbonDataLoadRDD.internalCompute(NewCarbonDataLoadRDD.scala:188)
 at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 # when two sessions are running in parallel, follow the steps below in session1
 ## drop table
 ## create table
 ## load data to table
 # follow the step below in session2
 ## query on the table (select * from table limit 1); the query returns a null 
result instead of the proper result

  was:
# when dataload is done with sort columns, it fails with following exeptions
 # when two sessions are running in parallel, the follow below steps in session1
 ## drop table
 ## create table
 ## load data to table
 # follow below step in session2
 ## query on table(select * from table limit 1), then the query returns null 
result instead for proper result


> Dataload fails when sort column is given, and query returns null value from 
> another session
> ---
>
> Key: CARBONDATA-2953
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2953
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>
> # when dataload is done with sort columns, it fails with the following exceptions
> java.lang.ClassCastException: java.lang.Integer cannot be cast to [B
>  at 
> org.apache.carbondata.processing.sort.sortdata.IntermediateSortTempRowComparator.compare(IntermediateSortTempRowComparator.java:71)
>  at 
> org.apache.carbondata.processing.loading.sort.unsafe.holder.UnsafeInmemoryHolder.compareTo(UnsafeInmemoryHolder.java:71)
>  at 
> org.apache.carbondata.processing.loading.sort.unsafe.holder.UnsafeInmemoryHolder.compareTo(UnsafeInmemoryHolder.java:26)
>  at java.util.PriorityQueue.siftUpComparable(PriorityQueue.java:656)
>  at java.util.PriorityQueue.siftUp(PriorityQueue.java:647)
>  at java.util.PriorityQueue.offer(PriorityQueue.java:344)
>  at java.util.PriorityQueue.add(PriorityQueue.java:321)
>  at 
> org.apache.carbondata.processing.loading.sort.unsafe.merger.UnsafeSingleThreadFinalSortFilesMerger.startSorting(UnsafeSingleThreadFinalSortFilesMerger.java:129)
>  at 
> org.apache.carbondata.processing.loading.sort.unsafe.merger.UnsafeSingleThreadFinalSortFilesMerger.startFinalMerge(UnsafeSingleThreadFinalSortFilesMerger.java:94)
>  at 
> org.apache.carbondata.processing.loading.sort.impl.UnsafeParallelReadMergeSorterImpl.sort(UnsafeParallelReadMergeSorterImpl.java:110)
>  at 
> org.apache.carbondata.processing.loading.steps.SortProcessorStepImpl.execute(SortProcessorStepImpl.java:55)
>  at 
> org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.execute(DataWriterProcessorStepImpl.java:112)
>  at 
> org.apache.carbondata.processing.loading.DataLoadExecutor.execute(DataLoadExecutor.java:51)
>  at 
> 

[jira] [Created] (CARBONDATA-2950) Alter table add columns fails for hive table in carbon session for spark version above 2.1

2018-09-19 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2950:
---

 Summary: Alter table add columns fails for hive table in carbon 
session for spark version above 2.1
 Key: CARBONDATA-2950
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2950
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


Spark does not support adding columns in spark-2.1, but it is supported in 2.2 
and above.

When add column is fired for a hive table in a carbon session, for spark 
versions above 2.1, it throws an error saying the operation is unsupported on 
a hive table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2969) Query on local dictionary column is giving empty data

2018-09-25 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2969:
---

 Summary: Query on local dictionary column is giving empty data
 Key: CARBONDATA-2969
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2969
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


Spark-2.3

 

when local dictionary is enabled for a column in spark-2.3, query on that 
column always gives empty data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2970) Basic queries like drop table and load are not working in ViewFS

2018-09-25 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2970:
---

 Summary: Basic queries like drop table and load are not working in 
ViewFS
 Key: CARBONDATA-2970
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2970
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


When the default fs is set to ViewFS, drop table and load fail with the 
following exception:

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2953) Dataload fails when sort column is given, and query returns null value from another session

2018-09-20 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2953:
---

 Summary: Dataload fails when sort column is given, and query 
returns null value from another session
 Key: CARBONDATA-2953
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2953
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


# when dataload is done with sort columns, it fails with the following 
exceptions
 # when two sessions are running in parallel, follow the steps below in session1
 ## drop table
 ## create table
 ## load data to table
 # follow the step below in session2
 ## query on the table (select * from table limit 1); the query returns a null 
result instead of the proper result



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2970) Basic queries like drop table and load are not working in ViewFS

2018-09-25 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2970:

Description: 
When the default fs is set to ViewFS, drop table and load fail with the 
following exception:

org.apache.carbondata.spark.exception.ProcessMetaDataException: operation 
failed for default.tb: Dropping table default.tb failed: Acquire table lock 
failed after retry, please try after some time
 at 
org.apache.spark.sql.execution.command.MetadataProcessOpeation$class.throwMetadataException(package.scala:52)
 at 
org.apache.spark.sql.execution.command.AtomicRunnableCommand.throwMetadataException(package.scala:86)
 at 
org.apache.spark.sql.execution.command.table.CarbonDropTableCommand.processMetadata(CarbonDropTableCommand.scala:157)
 at 
org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:90)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:71)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:69)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:80)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
 at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
 at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
 at org.apache.spark.sql.Dataset.(Dataset.scala:190)
 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
 at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
 at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:245)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:177)

 

  was:
when default fs is set to ViewFS then the drop table and load fails with 
follwing exception

 


> Basic queries like drop table and load are not working in ViewFS
> 
>
> Key: CARBONDATA-2970
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2970
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When the default fs is set to ViewFS, drop table and load fail with the 
> following exception:
> org.apache.carbondata.spark.exception.ProcessMetaDataException: operation 
> failed for default.tb: Dropping table default.tb failed: Acquire table lock 
> failed after retry, please try after some time
>  at org.apache.spark.sql.execution.command.MetadataProcessOpeation$class.throwMetadataException(package.scala:52)
>  at org.apache.spark.sql.execution.command.AtomicRunnableCommand.throwMetadataException(package.scala:86)
>  at org.apache.spark.sql.execution.command.table.CarbonDropTableCommand.processMetadata(CarbonDropTableCommand.scala:157)
>  at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:90)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:71)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:69)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:80)
>  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
>  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
>  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
>  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
>  at org.apache.spark.sql.Dataset.&lt;init&gt;(Dataset.scala:190)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
>  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
>  at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:245)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:177)
>  
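
The "Acquire table lock failed after retry, please try after some time" failure in the stack trace above comes from a bounded retry loop around table-lock acquisition. The following is a minimal, hypothetical sketch of that retry pattern only; the function name, retry count, and interval are illustrative and this is not CarbonData's actual lock implementation:

```python
import time

class LockAcquireError(Exception):
    """Raised when the table lock cannot be acquired within the retry budget."""

def acquire_table_lock(try_lock, retries=3, interval_secs=1.0):
    """Call try_lock() up to `retries` times, sleeping between attempts.

    try_lock is a zero-argument callable returning True when the lock is
    obtained.  Once the retry budget is exhausted, fail with the same
    message seen in the exception above.
    """
    for attempt in range(retries):
        if try_lock():
            return attempt + 1  # number of attempts it took
        if attempt < retries - 1:
            time.sleep(interval_secs)
    raise LockAcquireError(
        "Acquire table lock failed after retry, please try after some time")

# A lock that only becomes free on the third attempt, e.g. because a
# concurrent load still holds it for the first two.
state = {"calls": 0}
def flaky_lock():
    state["calls"] += 1
    return state["calls"] >= 3

attempts = acquire_table_lock(flaky_lock, retries=3, interval_secs=0.0)
```

If every attempt fails (for example when the lock path is unreachable, as a misconfigured ViewFS mount could cause), the caller sees only the final "failed after retry" error, which is why the root cause here was the filesystem rather than a genuinely held lock.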



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-3149) Support alter table column rename

2018-12-12 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-3149:

Attachment: Alter table column rename.odt

> Support alter table column rename
> -
>
> Key: CARBONDATA-3149
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3149
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Major
> Attachments: Alter table column rename.odt
>
>
> Please find the mailing list link for the same 
> http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Discussion-Alter-table-column-rename-feature-tt69814.html
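
A column-rename operation like the one proposed here needs the same validations any schema change does: the source column must exist and the target name must not collide with an existing column. A hypothetical sketch of that check over a simple `(name, type)` schema list — illustrative only, not the CarbonData implementation:

```python
def rename_column(schema, old_name, new_name):
    """Return a copy of `schema` (a list of (name, type) pairs) with
    old_name renamed to new_name, validating both names first."""
    names = [name for name, _ in schema]
    if old_name not in names:
        raise ValueError("column %s does not exist" % old_name)
    if new_name in names:
        raise ValueError("column %s already exists" % new_name)
    # Build a new list rather than mutating in place, so the old schema
    # stays available for rollback if the metastore update fails.
    return [(new_name if name == old_name else name, col_type)
            for name, col_type in schema]

schema = [("id", "int"), ("city", "string")]
renamed = rename_column(schema, "city", "town")
```

Returning a new schema instead of mutating the old one mirrors how a metadata change can be staged first and committed only after validation succeeds.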





[jira] [Commented] (CARBONDATA-3149) Support alter table column rename

2018-12-12 Thread Akash R Nilugal (JIRA)


[ 
https://issues.apache.org/jira/browse/CARBONDATA-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719760#comment-16719760
 ] 

Akash R Nilugal commented on CARBONDATA-3149:
-

Design document is attached

> Support alter table column rename
> -
>
> Key: CARBONDATA-3149
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3149
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Major
> Attachments: Alter table column rename.odt
>
>
> Please find the mailing list link for the same 
> http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Discussion-Alter-table-column-rename-feature-tt69814.html





[jira] [Updated] (CARBONDATA-3202) updated schema is not updated in session catalog after add, drop or rename column.

2018-12-26 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-3202:

Description: 
updated schema is not updated in session catalog after add, drop or rename 
column. 

 

Spark does not support drop column or rename column, and supports add column 
only from Spark 2.2 onwards, so after a rename, add, or drop column the 
updated schema is not reflected in the catalog

  was:
updated schema is not updated in session catalog after add, drop or rename 
column. 

 

Spark does not support drop column , rename column, and supports add column 
from spark2.2 onwards, so after rename, or add or drop column, the new updated 
schema is not updated in catalog


> updated schema is not updated in session catalog after add, drop or rename 
> column. 
> ---
>
> Key: CARBONDATA-3202
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3202
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Priority: Minor
>
> updated schema is not updated in session catalog after add, drop or rename 
> column. 
>  
> Spark does not support drop column or rename column, and supports add column 
> only from Spark 2.2 onwards, so after a rename, add, or drop column the 
> updated schema is not reflected in the catalog
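
The bug described here is a stale-cache problem: the metastore holds the new schema, but the session catalog keeps serving the cached old one. The toy model below shows the pattern and the fix (invalidate the cached entry when the table is altered). All names are illustrative; this is not Spark's actual `SessionCatalog` API:

```python
class SessionCatalog:
    """Toy session catalog that caches table schemas per session."""

    def __init__(self):
        self._store = {}  # authoritative metastore schemas
        self._cache = {}  # per-session cached schemas

    def create_table(self, name, schema):
        self._store[name] = list(schema)

    def lookup(self, name):
        # Serve from cache; a stale entry survives until invalidated.
        if name not in self._cache:
            self._cache[name] = list(self._store[name])
        return self._cache[name]

    def alter_table(self, name, new_schema, refresh=True):
        self._store[name] = list(new_schema)
        if refresh:
            # The fix: drop the cached entry so the next lookup re-reads
            # the updated schema instead of the stale one.
            self._cache.pop(name, None)
```

Without the `refresh` step, a rename, add, or drop column succeeds in the metastore while every subsequent query in the session still plans against the old schema, which is exactly the symptom this issue reports.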




