[GitHub] carbondata issue #1559: [CARBONDATA-1805][Dictionary] Optimize pruning for d...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1559
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1885/



---


[jira] [Updated] (CARBONDATA-1851) Refactor to use only SegmentsToAccess for Aggregatetable, move tableFolderDeletion to TableProcessingOperations

2017-12-11 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G updated CARBONDATA-1851:
-
Summary: Refactor to use only SegmentsToAccess for Aggregatetable, move 
tableFolderDeletion to TableProcessingOperations   (was: Refactor to remove )

> Refactor to use only SegmentsToAccess for Aggregatetable, move 
> tableFolderDeletion to TableProcessingOperations 
> 
>
> Key: CARBONDATA-1851
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1851
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Minor
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> 1) CarbonTableInputFormat: removed the AgregateSegmentsToAccess interface, as 
> the SegmentsToAccess interface already does the same job. Fixed a 
> NullPointerException when SegmentsToAccess is configured and the 
> validateSegments flag is set to false.
> 2) Moved the tableFolderDeletion logic to TableProcessingOperations to make 
> those APIs developer APIs.
> 3) Added a setter to CarbonProperties for adding extra properties.
> 4) Added a getter getImplicitDimensionByTableName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1851) Refactor to remove

2017-12-11 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G updated CARBONDATA-1851:
-
Summary: Refactor to remove   (was: refactor code for better usability to 
external system)

> Refactor to remove 
> ---
>
> Key: CARBONDATA-1851
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1851
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Minor
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> 1) CarbonTableInputFormat: removed the AgregateSegmentsToAccess interface, as 
> the SegmentsToAccess interface already does the same job. Fixed a 
> NullPointerException when SegmentsToAccess is configured and the 
> validateSegments flag is set to false.
> 2) Moved the tableFolderDeletion logic to TableProcessingOperations to make 
> those APIs developer APIs.
> 3) Added a setter to CarbonProperties for adding extra properties.
> 4) Added a getter getImplicitDimensionByTableName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1851) refactor code for better usability to external system

2017-12-11 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G updated CARBONDATA-1851:
-
Description: 
1) CarbonTableInputFormat: removed the AgregateSegmentsToAccess interface, as 
the SegmentsToAccess interface already does the same job. Fixed a 
NullPointerException when SegmentsToAccess is configured and the 
validateSegments flag is set to false.
2) Moved the tableFolderDeletion logic to TableProcessingOperations to make 
those APIs developer APIs.
3) Added a setter to CarbonProperties for adding extra properties.
4) Added a getter getImplicitDimensionByTableName.

> refactor code for better usability to external system
> 
>
> Key: CARBONDATA-1851
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1851
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> 1) CarbonTableInputFormat: removed the AgregateSegmentsToAccess interface, as 
> the SegmentsToAccess interface already does the same job. Fixed a 
> NullPointerException when SegmentsToAccess is configured and the 
> validateSegments flag is set to false.
> 2) Moved the tableFolderDeletion logic to TableProcessingOperations to make 
> those APIs developer APIs.
> 3) Added a setter to CarbonProperties for adding extra properties.
> 4) Added a getter getImplicitDimensionByTableName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1851) refactor code for better usability to external system

2017-12-11 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G updated CARBONDATA-1851:
-
Priority: Minor  (was: Major)

> refactor code for better usability to external system
> 
>
> Key: CARBONDATA-1851
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1851
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Minor
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> 1) CarbonTableInputFormat: removed the AgregateSegmentsToAccess interface, as 
> the SegmentsToAccess interface already does the same job. Fixed a 
> NullPointerException when SegmentsToAccess is configured and the 
> validateSegments flag is set to false.
> 2) Moved the tableFolderDeletion logic to TableProcessingOperations to make 
> those APIs developer APIs.
> 3) Added a setter to CarbonProperties for adding extra properties.
> 4) Added a getter getImplicitDimensionByTableName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1627: [CARBONDATA-1759]make visibility of segments as fals...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1627
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/655/



---


[GitHub] carbondata issue #1559: [CARBONDATA-1805][Dictionary] Optimize pruning for d...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1559
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/654/



---


[GitHub] carbondata issue #1632: [CARBONDATA-1839] [DataLoad]Fix bugs in compressing ...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1632
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1884/



---


[jira] [Commented] (CARBONDATA-1755) Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert overwrite and update job concurrently.

2017-12-11 Thread Kushal Sah (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287211#comment-16287211
 ] 

Kushal Sah commented on CARBONDATA-1755:


Username:   Kushal1988
Full Name:  Kushal Sah


> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> -
>
> Key: CARBONDATA-1755
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1755
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Priority: Minor
>  Labels: dfx
>
> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> Updated data will be overwritten by the insert overwrite job, so there is no 
> point in running an update job while an insert overwrite is in progress.
> Steps:
> 1: Create a table.
> 2: Do a data load.
> 3: Run an insert overwrite job.
> 4: Run an update job while the overwrite job is still running.
> 5: Observe that the update job finishes, and after that the overwrite job also 
> finishes.
> 6: All previous segments are marked for delete, and the update job has no 
> effect; the update job consumes resources unnecessarily.
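The race in the steps above could be avoided by serializing the two operations. Below is a minimal sketch of a table-level guard that lets an update job refuse to start while an insert overwrite is in flight; this is not CarbonData's actual locking API, and the class and method names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical table-level guard (not CarbonData's real locking API): the
// insert-overwrite job registers itself per table, and an update job checks
// the registry before starting, since the overwrite would discard its work.
public class OverwriteGuard {
    private static final ConcurrentMap<String, Boolean> OVERWRITE_IN_PROGRESS =
        new ConcurrentHashMap<>();

    // Called by the insert-overwrite job; false if another overwrite is running.
    public static boolean beginOverwrite(String table) {
        return OVERWRITE_IN_PROGRESS.putIfAbsent(table, Boolean.TRUE) == null;
    }

    public static void endOverwrite(String table) {
        OVERWRITE_IN_PROGRESS.remove(table);
    }

    // Called by an update job before it starts acquiring segments.
    public static boolean canStartUpdate(String table) {
        return !OVERWRITE_IN_PROGRESS.containsKey(table);
    }

    public static void main(String[] args) {
        beginOverwrite("test");
        System.out.println(canStartUpdate("test")); // prints false while overwrite runs
        endOverwrite("test");
        System.out.println(canStartUpdate("test")); // prints true again
    }
}
```

A real fix would have to use CarbonData's on-disk table/segment lock files rather than an in-memory map, since the two jobs may run in different JVMs.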



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1882) select a table with 'group by' and perform insert overwrite to another carbon table it fails

2017-12-11 Thread Kushal Sah (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287209#comment-16287209
 ] 

Kushal Sah commented on CARBONDATA-1882:


Can this issue be assigned to me?

> select a table with 'group by' and perform insert overwrite to another carbon 
> table it fails
> 
>
> Key: CARBONDATA-1882
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1882
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kushal Sah
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1755) Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert overwrite and update job concurrently.

2017-12-11 Thread Kushal Sah (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287208#comment-16287208
 ] 

Kushal Sah commented on CARBONDATA-1755:


Can this issue be assigned to me?

> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> -
>
> Key: CARBONDATA-1755
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1755
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Priority: Minor
>  Labels: dfx
>
> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> Updated data will be overwritten by the insert overwrite job, so there is no 
> point in running an update job while an insert overwrite is in progress.
> Steps:
> 1: Create a table.
> 2: Do a data load.
> 3: Run an insert overwrite job.
> 4: Run an update job while the overwrite job is still running.
> 5: Observe that the update job finishes, and after that the overwrite job also 
> finishes.
> 6: All previous segments are marked for delete, and the update job has no 
> effect; the update job consumes resources unnecessarily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1755) Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert overwrite and update job concurrently.

2017-12-11 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1755:
---

Assignee: (was: Kushal)

> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> -
>
> Key: CARBONDATA-1755
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1755
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Priority: Minor
>  Labels: dfx
>
> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> Updated data will be overwritten by the insert overwrite job, so there is no 
> point in running an update job while an insert overwrite is in progress.
> Steps:
> 1: Create a table.
> 2: Do a data load.
> 3: Run an insert overwrite job.
> 4: Run an update job while the overwrite job is still running.
> 5: Observe that the update job finishes, and after that the overwrite job also 
> finishes.
> 6: All previous segments are marked for delete, and the update job has no 
> effect; the update job consumes resources unnecessarily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1755) Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert overwrite and update job concurrently.

2017-12-11 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal reassigned CARBONDATA-1755:
---

Assignee: Kushal

> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> -
>
> Key: CARBONDATA-1755
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1755
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Assignee: Kushal
>Priority: Minor
>  Labels: dfx
>
> Carbon1.3.0 Concurrent Insert overwrite-update: User is able to run insert 
> overwrite and update job concurrently.
> Updated data will be overwritten by the insert overwrite job, so there is no 
> point in running an update job while an insert overwrite is in progress.
> Steps:
> 1: Create a table.
> 2: Do a data load.
> 3: Run an insert overwrite job.
> 4: Run an update job while the overwrite job is still running.
> 5: Observe that the update job finishes, and after that the overwrite job also 
> finishes.
> 6: All previous segments are marked for delete, and the update job has no 
> effect; the update job consumes resources unnecessarily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1601: [CARBONDATA-1787] Validation for table properties in...

2017-12-11 Thread kunal642
Github user kunal642 commented on the issue:

https://github.com/apache/carbondata/pull/1601
  
We will not validate the CREATE TABLE properties, as the user can define 
custom properties as well.
Please close this.


---


[GitHub] carbondata issue #1632: [CARBONDATA-1839] [DataLoad]Fix bugs in compressing ...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1632
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/653/



---


[GitHub] carbondata issue #1633: [CARBONDATA-1878] [DataMap] Fix bugs in unsafe datam...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1633
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1883/



---


[jira] [Assigned] (CARBONDATA-1787) Carbon 1.3.0- Global Sort: Global_Sort_Partitions parameter doesn't work, if specified in the Tblproperties, while creating the table.

2017-12-11 Thread anubhav tarar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anubhav tarar reassigned CARBONDATA-1787:
-

Assignee: anubhav tarar

> Carbon 1.3.0- Global Sort: Global_Sort_Partitions parameter doesn't work, if 
> specified in the Tblproperties, while creating the table.
> --
>
> Key: CARBONDATA-1787
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1787
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
>Reporter: Ayushi Sharma
>Assignee: anubhav tarar
>Priority: Minor
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Steps:
> 1. create table tstcust(c_custkey int, c_name string, c_address string, 
> c_nationkey bigint, c_phone string,c_acctbal decimal, c_mktsegment string, 
> c_comment string) STORED BY 'org.apache.carbondata.format' 
> tblproperties('sort_scope'='global_sort','GLOBAL_SORT_PARTITIONS'='2');
> Issue: 
> GLOBAL_SORT_PARTITIONS does not work when specified at table creation, 
> whereas the same property works if specified during the data load. 
> Expected:
> Either an error should be thrown for this property when it is specified in 
> the load, as is done for sort_scope, or the behavior should be updated in the 
> documentation.
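One plausible reading of the expected behavior can be illustrated with a small sketch (a hypothetical helper, not CarbonData's actual resolution code): a load option wins when present, otherwise the value from TBLPROPERTIES applies, otherwise a system default.

```java
import java.util.Map;

// Hypothetical resolver (not the actual CarbonData code) illustrating the
// expected precedence for GLOBAL_SORT_PARTITIONS. Keys are assumed to be
// normalized to lower case before lookup.
public class SortPartitionsResolver {
    public static int resolve(Map<String, String> loadOptions,
                              Map<String, String> tblProperties,
                              int defaultPartitions) {
        String value = loadOptions.get("global_sort_partitions");
        if (value == null) {
            // Fall back to the property set at CREATE TABLE time,
            // instead of silently ignoring it.
            value = tblProperties.get("global_sort_partitions");
        }
        return value == null ? defaultPartitions : Integer.parseInt(value);
    }

    public static void main(String[] args) {
        // Table created with TBLPROPERTIES('GLOBAL_SORT_PARTITIONS'='2'):
        System.out.println(
            resolve(Map.of(), Map.of("global_sort_partitions", "2"), 1));
    }
}
```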



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (CARBONDATA-1828) Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully.

2017-12-11 Thread Ayushi Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayushi Sharma resolved CARBONDATA-1828.
---
   Resolution: Fixed
Fix Version/s: 1.3.0

BAD_RECORDS_ACTION has been changed to FAIL by default.

> Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully. 
> -
>
> Key: CARBONDATA-1828
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1828
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ayushi Sharma
>Assignee: dhatchayani
> Fix For: 1.3.0
>
>
> 1. CREATE TABLE test3 (ID int,CUST_ID int,cust_name string) STORED BY 
> 'org.apache.carbondata.format'
> 2. LOAD DATA INPATH 'hdfs://hacluster/BabuStore/Data/InsertData/test3.csv' 
> into table test3 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','FILEHEADER'='ID,CUST_ID,Cust_name')
> The load should have failed, since the CSV is empty, but the data load was 
> successful.
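The expected guard can be sketched as a trivial pre-load check; this is a hypothetical helper, not the actual CarbonData code path: count the non-blank data rows and refuse the load when there are none.

```java
// Hypothetical pre-load guard (not the actual CarbonData code path): a load
// over a CSV with no data rows should be rejected rather than reported as a
// successful load.
public class EmptyCsvCheck {
    // Counts non-blank lines; with a header, at least two are needed for data.
    public static boolean hasDataRows(String csvContent, boolean hasHeader) {
        int nonBlank = 0;
        for (String line : csvContent.split("\n", -1)) {
            if (!line.trim().isEmpty()) {
                nonBlank++;
            }
        }
        return hasHeader ? nonBlank > 1 : nonBlank > 0;
    }

    public static void main(String[] args) {
        // Header only, no data rows:
        System.out.println(hasDataRows("ID,CUST_ID,Cust_name\n", true)); // prints false
    }
}
```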



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (CARBONDATA-1829) Carbon 1.3.0 - Spark 2.2: Insert is passing when Hive is having Float and Carbon is having INT value and load file is having single precision decimal value

2017-12-11 Thread Ayushi Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayushi Sharma resolved CARBONDATA-1829.
---
   Resolution: Fixed
Fix Version/s: 1.3.0

BAD_RECORDS_ACTION is set to FAIL by default.

> Carbon 1.3.0 - Spark 2.2: Insert is passing when Hive is having Float and 
> Carbon is having INT value and load file is having single precision decimal 
> value
> ---
>
> Key: CARBONDATA-1829
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1829
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Ayushi Sharma
> Fix For: 1.3.0
>
> Attachments: Hive3.csv
>
>
> Steps:
> 1. create table Hive3(Sell_price FLOAT, Item_code STRING, Qty_total 
> Double,Profit Decimal(4,3), Update_time TIMESTAMP )row format delimited 
> fields terminated by ',' collection items terminated by '$'
> 2. create table Carbon3(Sell_price INT, Item_code STRING, Qty_total 
> DECIMAL(3,1),Profit  DECIMAL(3,2), Update_time TIMESTAMP ) STORED BY 
> 'org.apache.carbondata.format'
> 3. load data LOCAL INPATH '/opt/csv/Data/InsertData/Hive3.csv' overwrite into 
> table Hive3
> Issue:
> The insert passes when the Hive column is FLOAT, the Carbon column is INT, 
> and the load file contains a single-precision decimal value.
> Expected:
> The insert should fail.
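The expected rejection can be sketched as a hypothetical bad-record check for an INT target column (not the actual CarbonData converter): a single-precision decimal value such as "3.5" fails integer parsing and should be treated as a bad record, which under BAD_RECORDS_ACTION=FAIL aborts the load instead of silently truncating.

```java
// Hypothetical bad-record check for an INT target column (not the actual
// CarbonData converter code).
public class IntColumnCheck {
    public static boolean isValidIntValue(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            return false;
        }
        try {
            Integer.parseInt(raw.trim());
            return true;
        } catch (NumberFormatException e) {
            return false; // "3.5" and other non-integers land here: bad record
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidIntValue("42"));  // prints true
        System.out.println(isValidIntValue("3.5")); // prints false
    }
}
```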



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1715) Carbon 1.3.0- Datamap BAD_RECORD_ACTION is not working as per the Document link.

2017-12-11 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani reassigned CARBONDATA-1715:
---

Assignee: dhatchayani

> Carbon 1.3.0- Datamap BAD_RECORD_ACTION is not working as per the Document 
> link.
> 
>
> Key: CARBONDATA-1715
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1715
> Project: CarbonData
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Assignee: dhatchayani
>Priority: Minor
>  Labels: Document
> Attachments: Bad_Records.PNG
>
>
> By default, BAD_RECORDS_ACTION = FORCE should be written in the 
> "http://carbondata.apache.org/dml-operation-on-carbondata.html" document, 
> but it is written as BAD_RECORDS_ACTION = FAIL.
> Expected result: BAD_RECORDS_ACTION = FORCE should be mentioned in the BAD 
> RECORDS HANDLING section of the document.
> Actual issue: BAD_RECORDS_ACTION = FAIL is present in the document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1715) Carbon 1.3.0- Datamap BAD_RECORD_ACTION is not working as per the Document link.

2017-12-11 Thread dhatchayani (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287197#comment-16287197
 ] 

dhatchayani commented on CARBONDATA-1715:
-

BAD_RECORDS_ACTION has now been changed to FAIL, as per the PR below:
https://github.com/apache/carbondata/pull/1574

[~revathip] please resolve this issue.

> Carbon 1.3.0- Datamap BAD_RECORD_ACTION is not working as per the Document 
> link.
> 
>
> Key: CARBONDATA-1715
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1715
> Project: CarbonData
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
>  Labels: Document
> Attachments: Bad_Records.PNG
>
>
> By default, BAD_RECORDS_ACTION = FORCE should be written in the 
> "http://carbondata.apache.org/dml-operation-on-carbondata.html" document, 
> but it is written as BAD_RECORDS_ACTION = FAIL.
> Expected result: BAD_RECORDS_ACTION = FORCE should be mentioned in the BAD 
> RECORDS HANDLING section of the document.
> Actual issue: BAD_RECORDS_ACTION = FAIL is present in the document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (CARBONDATA-1828) Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully.

2017-12-11 Thread dhatchayani (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287194#comment-16287194
 ] 

dhatchayani edited comment on CARBONDATA-1828 at 12/12/17 6:45 AM:
---

Solved by,
https://github.com/apache/carbondata/pull/1574

[~Ayushi_22] please resolve this issue


was (Author: dhatchayani):
Solved by,
https://github.com/apache/carbondata/pull/1574

> Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully. 
> -
>
> Key: CARBONDATA-1828
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1828
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ayushi Sharma
>Assignee: dhatchayani
>
> 1. CREATE TABLE test3 (ID int,CUST_ID int,cust_name string) STORED BY 
> 'org.apache.carbondata.format'
> 2. LOAD DATA INPATH 'hdfs://hacluster/BabuStore/Data/InsertData/test3.csv' 
> into table test3 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','FILEHEADER'='ID,CUST_ID,Cust_name')
> The load should have failed, since the CSV is empty, but the data load was 
> successful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1828) Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully.

2017-12-11 Thread dhatchayani (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287194#comment-16287194
 ] 

dhatchayani commented on CARBONDATA-1828:
-

Solved by,
https://github.com/apache/carbondata/pull/1574

> Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully. 
> -
>
> Key: CARBONDATA-1828
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1828
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ayushi Sharma
>Assignee: dhatchayani
>
> 1. CREATE TABLE test3 (ID int,CUST_ID int,cust_name string) STORED BY 
> 'org.apache.carbondata.format'
> 2. LOAD DATA INPATH 'hdfs://hacluster/BabuStore/Data/InsertData/test3.csv' 
> into table test3 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','FILEHEADER'='ID,CUST_ID,Cust_name')
> The load should have failed, since the CSV is empty, but the data load was 
> successful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1828) Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully.

2017-12-11 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani reassigned CARBONDATA-1828:
---

Assignee: dhatchayani

> Carbon 1.3.0 - Spark 2.2 Empty CSV is being loaded successfully. 
> -
>
> Key: CARBONDATA-1828
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1828
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ayushi Sharma
>Assignee: dhatchayani
>
> 1. CREATE TABLE test3 (ID int,CUST_ID int,cust_name string) STORED BY 
> 'org.apache.carbondata.format'
> 2. LOAD DATA INPATH 'hdfs://hacluster/BabuStore/Data/InsertData/test3.csv' 
> into table test3 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','FILEHEADER'='ID,CUST_ID,Cust_name')
> The load should have failed, since the CSV is empty, but the data load was 
> successful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata pull request #1643: [CARBONDATA-1883] Improvement in merge index ...

2017-12-11 Thread dhatchayani
GitHub user dhatchayani opened a pull request:

https://github.com/apache/carbondata/pull/1643

[CARBONDATA-1883] Improvement in merge index code

(1) Improved merge index code
(2) Added trigger point for merge index

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [ ] Testing done
   Manual Testing
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhatchayani/incubator-carbondata 
improve_mergeIndex

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1643.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1643


commit 85bea4b2c70c0ed5d58eea927cf5bd120e2ec6ec
Author: dhatchayani 
Date:   2017-12-12T06:32:15Z

[CARBONDATA-1883] Improvement in merge index code




---


[GitHub] carbondata issue #1633: [CARBONDATA-1878] [DataMap] Fix bugs in unsafe datam...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1633
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/652/



---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1882/



---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/651/



---


[jira] [Created] (CARBONDATA-1883) Improvement in merge index code

2017-12-11 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1883:
---

 Summary: Improvement in merge index code
 Key: CARBONDATA-1883
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1883
 Project: CarbonData
  Issue Type: Improvement
Reporter: dhatchayani
Assignee: dhatchayani
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1559: [CARBONDATA-1805][Dictionary] Optimize pruning for d...

2017-12-11 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/1559
  
retest this please


---


[GitHub] carbondata issue #1633: [CARBONDATA-1878] [DataMap] Fix bugs in unsafe datam...

2017-12-11 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/1633
  
retest this please


---


[GitHub] carbondata pull request #1641: [CARBONDATA-1882] select with group by and in...

2017-12-11 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1641#discussion_r156276866
  
--- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala ---
@@ -486,6 +486,21 @@ object CarbonDataRDDFactory {
       // if segment is empty then fail the data load
       if (!carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable.isChildDataMap &&
           !CarbonLoaderUtil.isValidSegment(carbonLoadModel, carbonLoadModel.getSegmentId.toInt)) {
+
+        if (overwriteTable && dataFrame.isDefined) {
+          carbonLoadModel.getLoadMetadataDetails.asScala.foreach {
+            loadDetails =>
+              if (loadDetails.getSegmentStatus.equals(SegmentStatus.SUCCESS)) {
+                loadDetails.setSegmentStatus(SegmentStatus.MARKED_FOR_DELETE)
+              }
+          }
+          val carbonTablePath = CarbonStorePath
--- End diff --

1) loadTablePreStatusUpdateEvent is not fired.
2) What about the old dictionary being overwritten?
3) The updatestatus file also needs to be handled accordingly.
Suggestion: follow the original flow for handling the empty-segment case.


---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
retest this please


---


[GitHub] carbondata pull request #1627: [CARBONDATA-1759]make visibility of segments ...

2017-12-11 Thread manishgupta88
Github user manishgupta88 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1627#discussion_r156275385
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/util/DeleteLoadFolders.java ---
@@ -122,26 +122,21 @@ private static boolean checkIfLoadCanBeDeleted(LoadMetadataDetails oneLoad,
     return false;
   }

-  public static boolean deleteLoadFoldersFromFileSystem(
+  public static void deleteLoadFoldersFromFileSystem(
       AbsoluteTableIdentifier absoluteTableIdentifier, boolean isForceDelete,
       LoadMetadataDetails[] details) {
-    boolean isDeleted = false;

     if (details != null && details.length != 0) {
       for (LoadMetadataDetails oneLoad : details) {
         if (checkIfLoadCanBeDeleted(oneLoad, isForceDelete)) {
           String path = getSegmentPath(absoluteTableIdentifier, 0, oneLoad);
-          boolean deletionStatus = physicalFactAndMeasureMetadataDeletion(path);
-          if (deletionStatus) {
-            isDeleted = true;
-            oneLoad.setVisibility("false");
-            LOGGER.info("Info: Deleted the load " + oneLoad.getLoadName());
-          }
+          physicalFactAndMeasureMetadataDeletion(path);
--- End diff --

While handling deletion in this method, return true when the file does not 
exist and add a warning log. As we don't have any other mechanism for clean 
up, it would be good to keep all the remaining code the same and only set the 
status to true when the file does not exist.
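The suggestion above (keep the boolean status, but treat an already-missing file as successfully cleaned up and only log a warning) can be sketched as follows; this is a hypothetical simplified helper, not the actual DeleteLoadFolders code.

```java
import java.io.File;
import java.util.logging.Logger;

// Hypothetical simplified helper (not the real DeleteLoadFolders): an
// already-missing segment folder counts as cleaned up, with a warning,
// so the load can still be marked invisible afterwards.
public class SegmentCleaner {
    private static final Logger LOGGER =
        Logger.getLogger(SegmentCleaner.class.getName());

    public static boolean deleteSegmentFolder(File folder) {
        if (!folder.exists()) {
            LOGGER.warning("Segment path already absent: " + folder.getPath());
            return true; // nothing left to delete; report success
        }
        boolean deleted = true;
        File[] children = folder.listFiles();
        if (children != null) {
            for (File child : children) {
                deleted &= deleteSegmentFolder(child);
            }
        }
        return folder.delete() && deleted;
    }

    public static void main(String[] args) {
        // Missing path: warns, but still reports success.
        System.out.println(deleteSegmentFolder(new File("/no/such/Segment_0")));
    }
}
```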


---


[GitHub] carbondata pull request #1641: [CARBONDATA-1882] select with group by and in...

2017-12-11 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1641#discussion_r156275284
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
 ---
@@ -486,6 +486,21 @@ object CarbonDataRDDFactory {
   // if segment is empty then fail the data load
--- End diff --

Correct comment


---


[jira] [Resolved] (CARBONDATA-1869) (Carbon1.3.0 - Spark 2.2) Null pointer exception thrown when concurrent load and select queries executed for table with dictionary exclude or NO_INVERTED_INDEX

2017-12-11 Thread Manish Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manish Gupta resolved CARBONDATA-1869.
--
Resolution: Fixed

> (Carbon1.3.0 - Spark 2.2) Null pointer exception thrown when concurrent load 
> and select queries executed for table with dictionary exclude or 
> NO_INVERTED_INDEX
> ---
>
> Key: CARBONDATA-1869
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1869
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: Chetan Bhat
>Assignee: dhatchayani
>  Labels: DFX
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Steps -
> From beeline terminal a table is created with table properties having 
> dictionary exclude or NO_INVERTED_INDEX- 
> create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('DICTIONARY_EXCLUDE'='a1');   or 
> create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('NO_INVERTED_INDEX'='a1');
>   From 3 concurrent beeline terminals the below sequence of insert into 
> select and select queries are executed 120 times.
>   insert into test select 2147483647;
>   select * from test;
>   select count(*) from test;
>   select a1 from test;
>   select 
> round(a1),bround(a1),floor(a1),ceil(a1),rand(),exp(a1),ln(a1),log10(a1),log2(1),log(a1),pow(a1,a1),sqrt(a1),bin(a1),pmod(a1,a1),sin(a1),asin(a1),cos(a1),tan(a1),atan(a1),degrees(a1),radians(a1),positive(a1),negative(a1),sign(a1),factorial(a1),cbrt(a1)
>  from test;
> 【Expected Output】:The insert into select query should be successful and the 
> null pointer exception should not be thrown when concurrent load and select 
> queries executed for table with dictionary exclude or NO_INVERTED_INDEX.
> 【Actual Output】:Null pointer exception thrown when concurrent load and select 
> queries executed with table properties having dictionary exclude or 
> NO_INVERTED_INDEX
>   0: jdbc:hive2://10.18.98.136:23040> insert into test select 2147483647;
> Error: java.lang.NullPointerException (state=,code=0)
> *+{color:red}Stacktrace:{color}+*
> java.lang.NullPointerException
>   at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.delete(AbstractDFSCarbonFile.java:152)
>   at 
> org.apache.carbondata.processing.util.DeleteLoadFolders.physicalFactAndMeasureMetadataDeletion(DeleteLoadFolders.java:90)
>   at 
> org.apache.carbondata.processing.util.DeleteLoadFolders.deleteLoadFoldersFromFileSystem(DeleteLoadFolders.java:134)
>   at 
> org.apache.carbondata.spark.rdd.DataManagementFunc$.deleteLoadsAndUpdateMetadata(DataManagementFunc.scala:187)
>   at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:296)
>   at 
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:362)
>   at 
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:193)
>   at 
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.run(CarbonLoadDataCommand.scala:65)
>   at 
> org.apache.spark.sql.execution.command.management.CarbonInsertIntoCommand.processData(CarbonInsertIntoCommand.scala:43)
>   at 
> org.apache.spark.sql.execution.command.DataCommand.run(package.scala:71)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
>   at org.apache.spark.sql.Dataset.(Dataset.scala:182)
>   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
>   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata pull request #1625: [CARBONDATA-1869] Null pointer exception thro...

2017-12-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1625


---


[jira] [Updated] (CARBONDATA-1869) (Carbon1.3.0 - Spark 2.2) Null pointer exception thrown when concurrent load and select queries executed for table with dictionary exclude or NO_INVERTED_INDEX

2017-12-11 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani updated CARBONDATA-1869:

Description: 
Steps -
From beeline terminal a table is created with table properties having 
dictionary exclude or NO_INVERTED_INDEX- 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('DICTIONARY_EXCLUDE'='a1');   or 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('NO_INVERTED_INDEX'='a1');

  From 3 concurrent beeline terminals the below sequence of insert into select 
and select queries are executed 120 times.
  insert into test select 2147483647;
  select * from test;
  select count(*) from test;
  select a1 from test;
  select 
round(a1),bround(a1),floor(a1),ceil(a1),rand(),exp(a1),ln(a1),log10(a1),log2(1),log(a1),pow(a1,a1),sqrt(a1),bin(a1),pmod(a1,a1),sin(a1),asin(a1),cos(a1),tan(a1),atan(a1),degrees(a1),radians(a1),positive(a1),negative(a1),sign(a1),factorial(a1),cbrt(a1)
 from test;

【Expected Output】:The insert into select query should be successful and the 
null pointer exception should not be thrown when concurrent load and select 
queries executed for table with dictionary exclude or NO_INVERTED_INDEX.

【Actual Output】:Null pointer exception thrown when concurrent load and select 
queries executed with table properties having dictionary exclude or 
NO_INVERTED_INDEX
  0: jdbc:hive2://10.18.98.136:23040> insert into test select 2147483647;
Error: java.lang.NullPointerException (state=,code=0)




*+{color:red}Stacktrace:{color}+*
java.lang.NullPointerException
at 
org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.delete(AbstractDFSCarbonFile.java:152)
at 
org.apache.carbondata.processing.util.DeleteLoadFolders.physicalFactAndMeasureMetadataDeletion(DeleteLoadFolders.java:90)
at 
org.apache.carbondata.processing.util.DeleteLoadFolders.deleteLoadFoldersFromFileSystem(DeleteLoadFolders.java:134)
at 
org.apache.carbondata.spark.rdd.DataManagementFunc$.deleteLoadsAndUpdateMetadata(DataManagementFunc.scala:187)
at 
org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:296)
at 
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:362)
at 
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:193)
at 
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.run(CarbonLoadDataCommand.scala:65)
at 
org.apache.spark.sql.execution.command.management.CarbonInsertIntoCommand.processData(CarbonInsertIntoCommand.scala:43)
at 
org.apache.spark.sql.execution.command.DataCommand.run(package.scala:71)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
at org.apache.spark.sql.Dataset.(Dataset.scala:182)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)

  was:
Steps -
From beeline terminal a table is created with table properties having 
dictionary exclude or NO_INVERTED_INDEX- 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('DICTIONARY_EXCLUDE'='a1');   or 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('NO_INVERTED_INDEX'='a1');

  From 3 concurrent beeline terminals the below sequence of insert into select 
and select queries are executed 120 times.
  insert into test select 2147483647;
  select * from test;
  select count(*) from test;
  select a1 from test;
  select 
round(a1),bround(a1),floor(a1),ceil(a1),rand(),exp(a1),ln(a1),log10(a1),log2(1),log(a1),pow(a1,a1),sqrt(a1),bin(a1),pmod(a1,a1),sin(a1),asin(a1),cos(a1),tan(a1),atan(a1),degrees(a1),radians(a1),positive(a1),negative(a1),sign(a1),factorial(a1),cbrt(a1)
 from test;

【Expected Output】:The insert into select query should be successful and the 
null pointer exception should not be thrown when concurrent load and select 
queries executed for table with dictionary exclude or NO_INVERTED_INDEX.

【Actual Output】:Null pointer exception thrown when concurrent load and select 
queries executed with table properties having dictionary exclude or 
NO_INVERTED_INDEX
  0: jdbc:hive2://10.18.98.136:23040> insert into test select 2147483647;
Error: java.lang.NullPointerException (state=,code=0)




Stacktrace:
java.lang.NullPointerException
at 

[jira] [Updated] (CARBONDATA-1869) (Carbon1.3.0 - Spark 2.2) Null pointer exception thrown when concurrent load and select queries executed for table with dictionary exclude or NO_INVERTED_INDEX

2017-12-11 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani updated CARBONDATA-1869:

Description: 
Steps -
From beeline terminal a table is created with table properties having 
dictionary exclude or NO_INVERTED_INDEX- 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('DICTIONARY_EXCLUDE'='a1');   or 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('NO_INVERTED_INDEX'='a1');

  From 3 concurrent beeline terminals the below sequence of insert into select 
and select queries are executed 120 times.
  insert into test select 2147483647;
  select * from test;
  select count(*) from test;
  select a1 from test;
  select 
round(a1),bround(a1),floor(a1),ceil(a1),rand(),exp(a1),ln(a1),log10(a1),log2(1),log(a1),pow(a1,a1),sqrt(a1),bin(a1),pmod(a1,a1),sin(a1),asin(a1),cos(a1),tan(a1),atan(a1),degrees(a1),radians(a1),positive(a1),negative(a1),sign(a1),factorial(a1),cbrt(a1)
 from test;

【Expected Output】:The insert into select query should be successful and the 
null pointer exception should not be thrown when concurrent load and select 
queries executed for table with dictionary exclude or NO_INVERTED_INDEX.

【Actual Output】:Null pointer exception thrown when concurrent load and select 
queries executed with table properties having dictionary exclude or 
NO_INVERTED_INDEX
  0: jdbc:hive2://10.18.98.136:23040> insert into test select 2147483647;
Error: java.lang.NullPointerException (state=,code=0)




Stacktrace:
java.lang.NullPointerException
at 
org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.delete(AbstractDFSCarbonFile.java:152)
at 
org.apache.carbondata.processing.util.DeleteLoadFolders.physicalFactAndMeasureMetadataDeletion(DeleteLoadFolders.java:90)
at 
org.apache.carbondata.processing.util.DeleteLoadFolders.deleteLoadFoldersFromFileSystem(DeleteLoadFolders.java:134)
at 
org.apache.carbondata.spark.rdd.DataManagementFunc$.deleteLoadsAndUpdateMetadata(DataManagementFunc.scala:187)
at 
org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:296)
at 
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:362)
at 
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:193)
at 
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.run(CarbonLoadDataCommand.scala:65)
at 
org.apache.spark.sql.execution.command.management.CarbonInsertIntoCommand.processData(CarbonInsertIntoCommand.scala:43)
at 
org.apache.spark.sql.execution.command.DataCommand.run(package.scala:71)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
at org.apache.spark.sql.Dataset.(Dataset.scala:182)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)

  was:
Steps -
From beeline terminal a table is created with table properties having 
dictionary exclude or NO_INVERTED_INDEX- 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('DICTIONARY_EXCLUDE'='a1');   or 
create table test(a1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('NO_INVERTED_INDEX'='a1');

  From 3 concurrent beeline terminals the below sequence of insert into select 
and select queries are executed 120 times.
  insert into test select 2147483647;
  select * from test;
  select count(*) from test;
  select a1 from test;
  select 
round(a1),bround(a1),floor(a1),ceil(a1),rand(),exp(a1),ln(a1),log10(a1),log2(1),log(a1),pow(a1,a1),sqrt(a1),bin(a1),pmod(a1,a1),sin(a1),asin(a1),cos(a1),tan(a1),atan(a1),degrees(a1),radians(a1),positive(a1),negative(a1),sign(a1),factorial(a1),cbrt(a1)
 from test;

【Expected Output】:The insert into select query should be successful and the 
null pointer exception should not be thrown when concurrent load and select 
queries executed for table with dictionary exclude or NO_INVERTED_INDEX.

【Actual Output】:Null pointer exception thrown when concurrent load and select 
queries executed with table properties having dictionary exclude or 
NO_INVERTED_INDEX
  0: jdbc:hive2://10.18.98.136:23040> insert into test select 2147483647;
Error: java.lang.NullPointerException (state=,code=0)


> (Carbon1.3.0 - Spark 2.2) Null pointer exception thrown when concurrent load 
> and select queries executed for table with 

[GitHub] carbondata issue #1642: [CARBONDATA-1855] Added outputformat to carbon

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1642
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2225/



---


[GitHub] carbondata issue #1642: [CARBONDATA-1855] Added outputformat to carbon

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1642
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1880/



---


[GitHub] carbondata issue #1642: [CARBONDATA-1855] Added outputformat to carbon

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1642
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/650/



---


[GitHub] carbondata issue #1642: [CARBONDATA-1855] Added outputformat to carbon

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1642
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1879/



---


[GitHub] carbondata issue #1642: [CARBONDATA-1855] Added outputformat to carbon

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1642
  
Build Failed with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/649/



---


[GitHub] carbondata issue #1642: [CARBONDATA-1855] Added outputformat to carbon

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1642
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2224/



---


[GitHub] carbondata pull request #1642: [CARBONDATA-1855] Added outputformat to carbo...

2017-12-11 Thread ravipesala
GitHub user ravipesala opened a pull request:

https://github.com/apache/carbondata/pull/1642

[CARBONDATA-1855] Added outputformat to carbon

Support the standard Hadoop OutputFormat interface for carbon. It will be 
helpful for integrations with execution engines like Spark, Hive, and Presto.
It should also maintain segment management while writing the data, to 
support the incremental loading feature.
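A dependency-free sketch of the contract this PR implements: the real integration extends `org.apache.hadoop.mapreduce.OutputFormat`, but the simplified interfaces and the segment bookkeeping below are illustrative assumptions, meant only to show how per-task writers plus a commit step can provide the segment management the description mentions.

```java
import java.util.ArrayList;
import java.util.List;

public class CarbonOutputFormatSketch {
  // Minimal stand-in for Hadoop's RecordWriter contract.
  interface RecordWriter<K, V> {
    void write(K key, V value);
    void close();
  }

  // Each task gets its own writer; closing it "commits" the segment,
  // which is when the newly loaded data would become visible to queries.
  static class SegmentRecordWriter implements RecordWriter<Void, String> {
    final List<String> segment = new ArrayList<>();
    boolean committed = false;

    public void write(Void key, String row) {
      segment.add(row);
    }

    public void close() {
      committed = true; // commit point: segment becomes visible
    }
  }

  public static SegmentRecordWriter getRecordWriter() {
    return new SegmentRecordWriter();
  }
}
```

Keeping the commit as a separate, explicit step is what lets incremental loading fail or retry a task without exposing half-written segments.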
Be sure to do all of the following checklists to help us incorporate 
your contribution quickly and easily:

 - [X] Any interfaces changed?
 
 - [] Any backward compatibility impacted?
 
 - [X] Document update required?

 - [X] Testing done
   Tests added
   
 - [X] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ravipesala/incubator-carbondata 
carbon-outformat

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1642.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1642


commit 1b149b4f0d67e54257915f7c8b2b467fd7a3b04a
Author: ravipesala 
Date:   2017-12-04T10:37:03Z

Added outputformat for carbon




---


[GitHub] carbondata issue #1634: [CARBONDATA-1585][Streaming] Describe formatted comm...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1634
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1878/



---


[GitHub] carbondata issue #1634: [CARBONDATA-1585][Streaming] Describe formatted comm...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1634
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/648/



---


[GitHub] carbondata issue #1634: [CARBONDATA-1585][Streaming] Describe formatted comm...

2017-12-11 Thread QiangCai
Github user QiangCai commented on the issue:

https://github.com/apache/carbondata/pull/1634
  
retest this please


---


[GitHub] carbondata issue #1641: [CARBONDATA-1882] select with group by and insertove...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1641
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2223/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread manishgupta88
Github user manishgupta88 commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
LGTM


---


[GitHub] carbondata pull request #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: D...

2017-12-11 Thread manishgupta88
Github user manishgupta88 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1630#discussion_r156134688
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/spark/util/CarbonReflectionUtils.scala
 ---
@@ -175,6 +175,17 @@ object CarbonReflectionUtils {
 }
   }
 
+  def getDescribeTableFormattedField[T: TypeTag : reflect.ClassTag](obj: T): Boolean = {
+var isFormatted: Boolean = false
+val im = rm.reflect(obj)
+for (m <- typeOf[T].members.filter(!_.isMethod)) {
+  if (m.toString.contains("isFormatted")) {
+isFormatted = im.reflectField(m.asTerm).get.asInstanceOf[Boolean]
--- End diff --

Once the condition is satisfied, exit the loop. Also, can you try to use the 
find method here instead of a for loop?
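The reviewer's point, stop scanning once the first match is found instead of walking every member, can be sketched with Java streams (the code under review is Scala reflection; this analogue, its class name, and its inputs are illustrative only):

```java
import java.util.stream.IntStream;

public class FindFirstSketch {
  /**
   * Returns the boolean paired with the first name containing `target`,
   * or false when nothing matches. findFirst() short-circuits, so the
   * scan stops at the first hit rather than visiting every element.
   */
  public static boolean getFlag(String[] names, boolean[] values, String target) {
    return IntStream.range(0, names.length)
        .filter(i -> names[i].contains(target))
        .mapToObj(i -> values[i])
        .findFirst()
        .orElse(false);
  }
}
```

In Scala the same shape is `members.find(_.toString.contains("isFormatted"))`, which both exits early and avoids the mutable `isFormatted` flag.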


---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests//



---


[GitHub] carbondata issue #1581: [CARBONDATA-1779] GenericVectorizedReader

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1581
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2221/



---


[GitHub] carbondata issue #1641: [CARBONDATA-1882] select with group by and insertove...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1641
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1877/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2220/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2219/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2218/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2217/



---


[GitHub] carbondata issue #1167: [CARBONDATA-1304] [IUD BuggFix] Iud with single pass

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1167
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2216/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1876/



---


[GitHub] carbondata issue #1581: [CARBONDATA-1779] GenericVectorizedReader

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1581
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1875/



---


[GitHub] carbondata issue #1641: select with group by and insertoverwrite to another ...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1641
  
Can one of the admins verify this patch?


---


[GitHub] carbondata pull request #1641: select with group by and insertoverwrite to a...

2017-12-11 Thread kushalsaha
GitHub user kushalsaha opened a pull request:

https://github.com/apache/carbondata/pull/1641

select with group by and insertoverwrite to another carbon table

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
No
 
 - [ ] Any backward compatibility impacted?
 No
 - [ ] Document update required?
No
 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   Yes
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kushalsaha/carbondata DTS_overwrite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1641.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1641


commit 06591387467b844340a832bb9ed1f5b76ac6b1e7
Author: kushalsaha 
Date:   2017-12-11T15:41:21Z

select with group by and insertoverwrite to another carbon table




---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
Build Failed with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/647/



---


[GitHub] carbondata issue #1581: [CARBONDATA-1779] GenericVectorizedReader

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1581
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/646/



---


[GitHub] carbondata issue #1601: [CARBONDATA-1787] Validation for table properties in...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1601
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2215/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1874/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/645/



---


[jira] [Created] (CARBONDATA-1882) select a table with 'group by' and perform insert overwrite to another carbon table it fails

2017-12-11 Thread Kushal Sah (JIRA)
Kushal Sah created CARBONDATA-1882:
--

 Summary: select a table with 'group by' and perform insert 
overwrite to another carbon table it fails
 Key: CARBONDATA-1882
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1882
 Project: CarbonData
  Issue Type: Bug
Reporter: Kushal Sah






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1616: [CARBONDATA-1851] Code refactored for better usablit...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1616
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2214/



---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1873/



---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/644/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
retest this please


---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
retest this please


---


[GitHub] carbondata issue #1622: [CARBONDATA-1865] Refactored code to skip single-pas...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1622
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2213/



---


[GitHub] carbondata issue #1627: [CARBONDATA-1759]make visibility of segments as fals...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1627
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2212/



---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2211/



---


[GitHub] carbondata issue #1633: [CARBONDATA-1878] [DataMap] Fix bugs in unsafe datam...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1633
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/643/



---


[GitHub] carbondata issue #1633: [CARBONDATA-1878] [DataMap] Fix bugs in unsafe datam...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1633
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1872/



---


[GitHub] carbondata issue #1632: [CARBONDATA-1839] [DataLoad]Fix bugs in compressing ...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1632
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2210/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1871/



---


[GitHub] carbondata issue #1633: [CARBONDATA-1878] [DataMap] Fix bugs in unsafe datam...

2017-12-11 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/1633
  
retest this please


---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
Build Failed with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/642/



---


[GitHub] carbondata issue #1633: [CARBONDATA-1878] [DataMap] Fix bugs in unsafe datam...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1633
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2209/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1870/



---


[GitHub] carbondata issue #1634: [CARBONDATA-1585][Streaming] Describe formatted comm...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1634
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2208/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/641/



---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1868/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1869/



---


[GitHub] carbondata issue #1640: [WIP] Annotate carbon property

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1640
  
Build Failed with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/640/



---


[GitHub] carbondata issue #1630: [CARBONDATA-1826] Carbon 1.3.0 - Spark 2.2: Describe...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1630
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/639/



---


[GitHub] carbondata issue #1637: [CARBONDATA-1876]clean all the InProgress segments f...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1637
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2207/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
Retest this please


---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1867/



---


[GitHub] carbondata issue #1625: [CARBONDATA-1869] Null pointer exception thrown when...

2017-12-11 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1625
  
Build Success with Spark 2.2.0, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/638/



---


[GitHub] carbondata issue #1638: [CARBONDATA-1879][Streaming] Support alter table to ...

2017-12-11 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1638
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2206/



---


[jira] [Comment Edited] (CARBONDATA-1743) Carbon1.3.0-Pre-AggregateTable - Query returns no value if run at the time of pre-aggregate table creation

2017-12-11 Thread Ramakrishna S (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285692#comment-16285692
 ] 

Ramakrishna S edited comment on CARBONDATA-1743 at 12/11/17 9:54 AM:
-

The same behaviour is seen if the pre-agg table is created while the parent table
load is in progress: NULL values are inserted into the pre-agg table.

0: jdbc:hive2://10.18.98.34:23040> select * from lineitem1_agr_line limit 2;
+-----------------------+-------------------------+-----------------------------+
| lineitem1_l_shipdate  | lineitem1_l_returnflag  | lineitem1_l_partkey_count   |
+-----------------------+-------------------------+-----------------------------+
| NULL                  | NULL                    | NULL                        |
| NULL                  | NULL                    | NULL                        |
+-----------------------+-------------------------+-----------------------------+


was (Author: ram@huawei):
The same behaviour is seen if the pre-agg table is created while the parent table
load is in progress: NULL values are inserted into the pre-agg table.

> Carbon1.3.0-Pre-AggregateTable - Query returns no value if run at the time of 
> pre-aggregate table creation
> --
>
> Key: CARBONDATA-1743
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1743
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: Test - 3 node ant cluster
>Reporter: Ramakrishna S
>Assignee: Kunal Kapoor
>  Labels: DFX
> Fix For: 1.3.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Steps:
> 1. Create table and load with large data
> create table if not exists lineitem4(L_SHIPDATE string,L_SHIPMODE 
> string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
> string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
> int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
> double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
> 'org.apache.carbondata.format' TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
> lineitem4 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> 2. Create a pre-aggregate table 
> create datamap agr_lineitem4 ON TABLE lineitem4 USING 
> "org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem4 
> group by  L_RETURNFLAG, L_LINESTATUS;
> 3. Run aggregate query at the same time
>  select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
> lineitem4 group by l_returnflag, l_linestatus;
> *+Expected:+*: aggregate query should fetch data either from main table or 
> pre-aggregate table.
> *+Actual:+* aggregate query does not return data until the pre-aggregate 
> table is created
> 0: jdbc:hive2://10.18.98.48:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem4 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+-----------------------+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
> +---------------+---------------+------------------+-----------------------+
> +---------------+---------------+------------------+-----------------------+
> No rows selected (1.74 seconds)
> 0: jdbc:hive2://10.18.98.48:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem4 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+-----------------------+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
> +---------------+---------------+------------------+-----------------------+
> +---------------+---------------+------------------+-----------------------+
> No rows selected (0.746 seconds)
> 0: jdbc:hive2://10.18.98.48:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem4 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+------------------------+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  |  sum(l_extendedprice)  |
> +---------------+---------------+------------------+------------------------+
> | N             | F             | 2.9808092E7      | 4.471079473931997E10   |
> 
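The expected behaviour stated in the report above (serve the aggregate query from either the main table or the pre-aggregate table, never an empty result) can be sketched as follows. `route_aggregate_query` and its parameters are illustrative names only, not CarbonData's actual query-routing API:

```python
def route_aggregate_query(main_rows, pre_agg_rows, pre_agg_load_complete):
    """Answer from the pre-aggregate table only once it is fully loaded;
    otherwise fall back to the main table so no rows are silently dropped."""
    if pre_agg_rows is not None and pre_agg_load_complete:
        return pre_agg_rows
    return main_rows

main = [("N", "F", 2.9808092e7, 4.471079473931997e10)]
# While the pre-agg table is still being created, the datamap exists but its
# load is not finished -- the query should still return the main-table result:
during_load = route_aggregate_query(main, [], pre_agg_load_complete=False)
# Once the load completes, the pre-aggregate result can be served directly:
after_load = route_aggregate_query(main, main, pre_agg_load_complete=True)
```

Under this sketch the query never observes the half-built state that produces the empty result sets shown above.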

[jira] [Commented] (CARBONDATA-1743) Carbon1.3.0-Pre-AggregateTable - Query returns no value if run at the time of pre-aggregate table creation

2017-12-11 Thread Ramakrishna S (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285692#comment-16285692
 ] 

Ramakrishna S commented on CARBONDATA-1743:
---

The same behaviour is seen if the pre-agg table is created while the parent table
load is in progress: NULL values are inserted into the pre-agg table.

> Carbon1.3.0-Pre-AggregateTable - Query returns no value if run at the time of 
> pre-aggregate table creation
> --
>
> Key: CARBONDATA-1743
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1743
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: Test - 3 node ant cluster
>Reporter: Ramakrishna S
>Assignee: Kunal Kapoor
>  Labels: DFX
> Fix For: 1.3.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Steps:
> 1. Create table and load with large data
> create table if not exists lineitem4(L_SHIPDATE string,L_SHIPMODE 
> string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
> string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
> int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
> double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
> 'org.apache.carbondata.format' TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
> lineitem4 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> 2. Create a pre-aggregate table 
> create datamap agr_lineitem4 ON TABLE lineitem4 USING 
> "org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem4 
> group by  L_RETURNFLAG, L_LINESTATUS;
> 3. Run aggregate query at the same time
>  select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
> lineitem4 group by l_returnflag, l_linestatus;
> *+Expected:+*: aggregate query should fetch data either from main table or 
> pre-aggregate table.
> *+Actual:+* aggregate query does not return data until the pre-aggregate 
> table is created
> 0: jdbc:hive2://10.18.98.48:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem4 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+-----------------------+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
> +---------------+---------------+------------------+-----------------------+
> +---------------+---------------+------------------+-----------------------+
> No rows selected (1.74 seconds)
> 0: jdbc:hive2://10.18.98.48:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem4 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+-----------------------+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
> +---------------+---------------+------------------+-----------------------+
> +---------------+---------------+------------------+-----------------------+
> No rows selected (0.746 seconds)
> 0: jdbc:hive2://10.18.98.48:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem4 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+------------------------+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  |  sum(l_extendedprice)  |
> +---------------+---------------+------------------+------------------------+
> | N             | F             | 2.9808092E7      | 4.471079473931997E10   |
> | A             | F             | 1.145546488E9    | 1.717580824169429E12   |
> | N             | O             | 2.31980219E9     | 3.4789002701143467E12  |
> | R             | F             | 1.146403932E9    | 1.7190627928317903E12  |
> +---------------+---------------+------------------+------------------------+
> 4 rows selected (0.8 seconds)
> 0: jdbc:hive2://10.18.98.48:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem4 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+------------------------+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  |  sum(l_extendedprice)  |
> +---------------+---------------+------------------+------------------------+
> | N  

[GitHub] carbondata pull request #1581: [CARBONDATA-1779] GenericVectorizedReader

2017-12-11 Thread bhavya411
Github user bhavya411 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1581#discussion_r156023011
  
--- Diff: core/src/main/java/org/apache/carbondata/core/scan/collector/impl/RestructureBasedVectorResultCollector.java ---
@@ -238,7 +238,7 @@ private void fillDataForNonExistingMeasures() {
             (long) defaultValue);
       } else if (DataTypes.isDecimal(dataType)) {
         vector.putDecimals(columnVectorInfo.vectorOffset, columnVectorInfo.size,
-            (Decimal) defaultValue, measure.getPrecision());
+            ((Decimal) defaultValue).toJavaBigDecimal(), measure.getPrecision());
--- End diff --

We cannot remove these imports because the returned data is stored in Spark
format; the casting is done to convert it to generic types.
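
The point made in this review — converting the engine's internal `Decimal` to a generic decimal type so the vectorized reader handles only engine-neutral values — can be illustrated with a small sketch. `SparkLikeDecimal` and `put_decimals` are hypothetical stand-ins, not CarbonData or Spark APIs:

```python
from decimal import Decimal

class SparkLikeDecimal:
    """Hypothetical engine-specific wrapper: an unscaled integer plus a scale,
    similar in spirit to Spark's internal fixed-point decimal representation."""
    def __init__(self, unscaled: int, scale: int):
        self.unscaled = unscaled
        self.scale = scale

    def to_generic(self) -> Decimal:
        # Convert to a generic decimal before handing the value to the reader,
        # so callers need no engine-specific classes or casts.
        return Decimal(self.unscaled).scaleb(-self.scale)

def put_decimals(vector, offset, size, value: Decimal):
    """Fill `size` slots of a column vector with one generic decimal value."""
    for i in range(size):
        vector[offset + i] = value

vec = [None] * 4
default = SparkLikeDecimal(12345, 2).to_generic()  # unscaled 12345, scale 2
put_decimals(vec, 1, 2, default)
```

The conversion happens once at the boundary, analogous to calling `toJavaBigDecimal()` before `putDecimals` in the diff above.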


---

