[GitHub] carbondata issue #1079: [WIP]Measure Filter implementation

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1079
  

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/583/

Failed Tests: 134

carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark: 7
- org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.read and write using CarbonContext
- org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.read and write using CarbonContext with compression
- org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.test overwrite
- org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.read and write using CarbonContext, multiple load
- org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.query using SQLContext
- org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.query using SQLContext without providing schema
- org.apache.carbondata.spark.testsuite.datacompaction.DataCompactionTest.check if compaction with Updates

carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 127
- org.apache.carbondata.integration.spark.testsuite.primitiveTypes.FloatDataTypeTestCase.select row whose rating is more than 2.8 from tfloat
- org.apache.carbondata.integration.spark.testsuite.primitiveTypes.FloatDataTypeTestCase.select row whose rating is 3.5 from tfloat
- org.apache.carbondata.spark.testsuite.allqueries.AllDataTypesTestCaseAggregate.select imei from Carbon_automation_test where contractNumber is NOT null
- org.apache.carbondata.spark.testsuite.allqueries.AllDataTypesTestCaseAggregate.select count(bomCode) from Carbon_automation_test where contractNumber is NOT null
- org.apache.carbondata.spark.testsuite.allqueries.AllDataTypesTestCaseAggregate.select channelsName from Carbon_automation_test where contractNumber is NOT null
- org.apache.carbondata.spark.testsuite.allqueries.AllDataTypesTestCaseAggregate.select channelsId from Carbon_automation_test where gamePointId is NOT null
- org.apache.carbondata.spark.testsuite.allqueries.AllDataTypesTestCaseAggregate.select channelsName from Carbon_automation_test where gamePointId is NOT null
- org.apache.carbondata.spark.testsuite.bigdecimal.TestBigDecimal.test filter query on big decimal column
- org.apache.carbondata.spark.testsuite.bigdecimal.TestBigInt.test big int data type storage for boundary values
- org.apache.carbondata.spark.testsuite.bigdecimal.TestNullAndEmptyFields.test filter query on column is null
- org.apache.carbondata.spark.testsuite.bigdecimal.TestNullAndEmptyFields.test filter query on column is not null
- org.apache.carbondata.spark.testsuite.bigdecimal.TestNullAndEmptyFieldsUnsafe.test filter query on column is null
- org.apache.carbondata.spark.testsuite.bigdecimal.TestNullAndEmptyFieldsUnsafe.test filter query on column is not null
- org.apache.carbondata.spark.testsuite.dataload.TestGlobalSortDataLoad.LOAD with DELETE
- org.apache.carbondata.spark.testsuite.dataload.TestGlobalSortDataLoad.LOAD with UPDATE
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe with saving compressed csv files
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe with saving csv uncompressed files
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe without saving csv files
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe with integer columns included in the dictionary
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe with string column excluded from the dictionary
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe with both dictionary include and exclude specified
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe with single pass enabled
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataFrame.test load dataframe with single pass disabled
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithDiffTimestampFormat.test load data with different timestamp format
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithHiveSyntaxDefaultFormat.test carbon table data loading with special character 2
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithHiveSyntaxDefaultFormat.test data which contain column with decimal data type in array of struct.
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithHiveSyntaxUnsafe.test carbon table data loading with special character 2
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithHiveSyntaxUnsafe.test data which contain column with decimal data type in array of struct.
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithHiveSyntaxV1Format.test carbon table data loading with special character …

[GitHub] carbondata issue #1079: [WIP]Measure Filter implementation

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1079
  
Build Failed with Spark 2.1.0. Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2663/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1079: [WIP]Measure Filter implementation

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1079
  
Build Failed with Spark 1.6. Please check CI:
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/91/





[GitHub] carbondata issue #1079: [WIP]Measure Filter implementation

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1079
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1079: [WIP]Measure Filter implementation

2017-06-22 Thread sounakr
GitHub user sounakr opened a pull request:

https://github.com/apache/carbondata/pull/1079

[WIP]Measure Filter implementation

Measure Filter Implementation

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sounakr/incubator-carbondata measure_filter

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1079.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1079


commit b3fa1780ae0e26fa379d812f9aec1c1c6274b8c6
Author: sounakr 
Date:   2017-06-20T17:22:36Z

Measure Filter implementation






[GitHub] carbondata pull request #1069: [WIP] Measure Filter implementation

2017-06-22 Thread sounakr
Github user sounakr closed the pull request at:

https://github.com/apache/carbondata/pull/1069




[GitHub] carbondata pull request #1032: [CARBONDATA-1149] Fixed range info overlappin...

2017-06-22 Thread manishgupta88
Github user manishgupta88 closed the pull request at:

https://github.com/apache/carbondata/pull/1032




[GitHub] carbondata pull request #1032: [CARBONDATA-1149] Fixed range info overlappin...

2017-06-22 Thread manishgupta88
GitHub user manishgupta88 reopened a pull request:

https://github.com/apache/carbondata/pull/1032

[CARBONDATA-1149] Fixed range info overlapping values issue

Fixed the range-info overlapping-values issue. Added data-type-based validation that sorts the range-info values and checks whether any of them overlap.
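The PR description is the only detail available here, so as a rough, language-neutral sketch of the idea (the function name and shape are hypothetical, not CarbonData's actual code): range boundaries must be parsed with the column's data type before comparing, otherwise a lexicographic string comparison can wrongly accept overlapping ranges.

```python
def validate_range_info(bounds, parse):
    """Parse range-info boundary strings with the column's type, then require
    a strictly increasing order so that no two ranges overlap."""
    typed = [parse(b) for b in bounds]
    for prev, cur in zip(typed, typed[1:]):
        if cur <= prev:
            raise ValueError(f"overlapping range info values: {prev!r} >= {cur!r}")
    return typed

# Lexicographically "10" < "2", so a raw string comparison would accept
# ["10", "2", "30"]; parsing as int correctly rejects it as overlapping.
print(validate_range_info(["10", "20", "30"], int))  # [10, 20, 30]
```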

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manishgupta88/incubator-carbondata 
rangeInfo_overlapping_validation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1032.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1032


commit 25fffbcb4536ef05e4165fba545c4947daef1f98
Author: manishgupta88 
Date:   2017-06-14T10:48:17Z

Fixed range partitioning overlapping values issue. Added datatype based 
validation for range partitioning overlapping values check






[jira] [Comment Edited] (CARBONDATA-1203) insert data caused many duplicated data on spark 1.6.2

2017-06-22 Thread chenerlu (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060325#comment-16060325
 ] 

chenerlu edited comment on CARBONDATA-1203 at 6/23/17 2:36 AM:
---

Hi, I encountered the same problem. The issue can be summarized as follows.

Step 1: create a carbon table.
  cc.sql("CREATE TABLE IF NOT EXISTS t3 (id Int, name String) STORED BY 'carbondata'")

Step 2: load data; t3 now has 10 records.
  cc.sql("LOAD DATA LOCAL INPATH 'mypathofdata' INTO TABLE t3")

Step 3: insert constants into table t3.
  cc.sql("INSERT INTO TABLE t3 SELECT 1, 'jack' FROM t3")

Step 4: count table t3.
  cc.sql("SELECT count(*) FROM t3")

Actual result: t3 has 20 records. (20 = 10 + 10. The second 10 appears because t3 itself has 10 records; with a table t4 that has 5 records instead, the result is 15. So I think CarbonData expands the constant once per source row, like '*'. Not sure; this should be confirmed.)
Expected result: t3 should have 11 records, or the statement should throw sql.AnalysisException (the same as for a Hive table, I think).

Any ideas about this issue? Which solution is better?
[~ravi.pesala] [~chenliang613]
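For context on the step-3 behavior: in standard SQL, a constant SELECT over a table yields one row per source row, so other engines produce the same count. A self-contained SQLite sketch (the table name and row counts mirror the steps above; this is not CarbonData code):

```python
import sqlite3

# Simulate the scenario: a 10-row table, then INSERT ... SELECT constants FROM it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t3 (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t3 VALUES (?, ?)",
                 [(i, f"name{i}") for i in range(10)])   # the 10 loaded records

# "SELECT 1, 'jack' FROM t3" produces one constant row PER ROW of t3.
conn.execute("INSERT INTO t3 SELECT 1, 'jack' FROM t3")

count = conn.execute("SELECT count(*) FROM t3").fetchone()[0]
print(count)  # 20
```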



> insert data caused  many duplicated data on spark 1.6.2
> ---
>
> Key: CARBONDATA-1203
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1203
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Jarck
>
> I used branch-1.1 to run an insert test on Spark 1.6.2 on my local machine.
> I tried to run the SQL below to insert data:
>   spark.sql(s"""
>  insert into $tableName select $id,'$date','$country','$testName'
>  ,'$phoneType','$serialname',$salary from $tableName
>  """).show()
> The data was inserted successfully, but it inserted many
> duplicate rows.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)










[GitHub] carbondata issue #1032: [CARBONDATA-1149] Fixed range info overlapping value...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1032
  

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/581/

Failed Tests: 20

carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 20 (all in org.apache.carbondata.spark.testsuite.partition)
- TestAllDataTypeForPartitionTable.allTypeTable_range_smallInt
- TestAllDataTypeForPartitionTable.allTypeTable_range_int
- TestAllDataTypeForPartitionTable.allTypeTable_range_bigint
- TestAllDataTypeForPartitionTable.allTypeTable_range_float
- TestAllDataTypeForPartitionTable.allTypeTable_range_double
- TestAllDataTypeForPartitionTable.allTypeTable_range_decimal
- TestAllDataTypeForPartitionTable.allTypeTable_range_timestamp
- TestAllDataTypeForPartitionTable.allTypeTable_range_date
- TestAllDataTypeForPartitionTable.allTypeTable_range_string
- TestAllDataTypeForPartitionTable.allTypeTable_range_varchar
- TestAllDataTypeForPartitionTable.allTypeTable_range_char
- TestDDLForPartitionTable.create partition table: range partition
- TestDDLForPartitionTable.create partition table: list partition
- TestDataLoadingForPartitionTable.data loading for partition table: range partition
- TestDataLoadingForPartitionTable.data loading for partition table: list partition
- TestDataLoadingForPartitionTable.single pass data loading for partition table: range partition
- TestDataLoadingForPartitionTable.single pass data loading for partition table: list partition
- TestDataLoadingForPartitionTable.Insert into for partition table: range partition
- TestDataLoadingForPartitionTable.Insert into partition table: list partition
- TestQueryForPartitionTable.detail query on partition table: range partition





[GitHub] carbondata issue #1032: [CARBONDATA-1149] Fixed range info overlapping value...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1032
  
Build Failed with Spark 2.1.0. Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2662/





[GitHub] carbondata issue #1032: [CARBONDATA-1149] Fixed range info overlapping value...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1032
  
Build Failed with Spark 1.6. Please check CI:
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/90/





[GitHub] carbondata issue #1059: [CARBONDATA-1124] Use raw compression while encoding...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1059
  

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/580/

Failed Tests: 1

carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 1
- org.apache.carbondata.spark.testsuite.dataretention.DataRetentionConcurrencyTestCase.DataRetention_Concurrency_load_date





[GitHub] carbondata issue #1059: [CARBONDATA-1124] Use raw compression while encoding...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1059
  
Build Success with Spark 1.6. Please check CI:
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/89/





[GitHub] carbondata issue #1059: [CARBONDATA-1124] Use raw compression while encoding...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1059
  
Build Success with Spark 2.1.0. Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2661/





[GitHub] carbondata pull request #1065: [CARBONDATA-1196] Add 3 bytes data type suppo...

2017-06-22 Thread QiangCai
Github user QiangCai commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1065#discussion_r123543556
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/aggquery/IntegerDataTypeTestCase.scala
 ---
@@ -39,7 +43,80 @@ class IntegerDataTypeTestCase extends QueryTest with BeforeAndAfterAll {
       Seq(Row(11), Row(12), Row(13), Row(14), Row(15), Row(16), Row(17), Row(18), Row(19), Row(20)))
   }
 
+  test("short int table boundary test, safe column page") {
+    sql(
+      """
+        | DROP TABLE IF EXISTS short_int_table
+      """.stripMargin)
+    // value column is less than short int, value2 column is bigger than short int
+    sql(
+      """
+        | CREATE TABLE short_int_table
+        | (value int, value2 int, name string)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+    sql(
+      s"""
+        | LOAD DATA LOCAL INPATH '$resourcesPath/shortintboundary.csv'
+        | INTO TABLE short_int_table
+      """.stripMargin)
+    checkAnswer(
+      sql("select value from short_int_table"),
+      Seq(Row(0), Row(127), Row(128), Row(-127), Row(-128), Row(32767), Row(-32767), Row(32768), Row(-32768), Row(65535),
+        Row(-65535), Row(8388606), Row(-8388606), Row(8388607), Row(-8388607), Row(0), Row(0), Row(0), Row(0))
+    )
+    checkAnswer(
+      sql("select value2 from short_int_table"),
+      Seq(Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0),
+        Row(0), Row(0), Row(0), Row(0), Row(0), Row(8388608), Row(-8388608), Row(8388609), Row(-8388609))
+    )
+    sql(
+      """
+        | DROP TABLE short_int_table
+      """.stripMargin)
+  }
+
+  test("short int table boundary test, unsafe column page") {
+    CarbonProperties.getInstance().addProperty(
+      CarbonCommonConstants.ENABLE_UNSAFE_COLUMN_PAGE_LOADING, "true"
+    )
+    sql(
+      """
+        | DROP TABLE IF EXISTS short_int_table
+      """.stripMargin)
+    // value column is less than short int, value2 column is bigger than short int
+    sql(
+      """
+        | CREATE TABLE short_int_table
+        | (value int, value2 int, name string)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+    sql(
+      s"""
+        | LOAD DATA LOCAL INPATH '$resourcesPath/shortintboundary.csv'
+        | INTO TABLE short_int_table
+      """.stripMargin)
+    checkAnswer(
+      sql("select value from short_int_table"),
+      Seq(Row(0), Row(127), Row(128), Row(-127), Row(-128), Row(32767), Row(-32767), Row(32768), Row(-32768), Row(65535),
+        Row(-65535), Row(8388606), Row(-8388606), Row(8388607), Row(-8388607), Row(0), Row(0), Row(0), Row(0))
+    )
+    checkAnswer(
+      sql("select value2 from short_int_table"),
+      Seq(Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0), Row(0),
+        Row(0), Row(0), Row(0), Row(0), Row(0), Row(8388608), Row(-8388608), Row(8388609), Row(-8388609))
+    )
+    sql(
+      """
+        | DROP TABLE short_int_table
+      """.stripMargin)
+    CarbonProperties.getInstance().addProperty(
--- End diff --

please move this code to afterAll function
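The reviewer's point is that restoring a global property belongs in the suite-level teardown, so it is reset even when an individual test fails. A rough analogue in Python's unittest (the original is ScalaTest, where afterAll plays the role of tearDownClass; the property dict and key below are stand-ins, not real CarbonData API):

```python
import unittest

# Stand-in for a process-global configuration store.
PROPERTIES = {"enable.unsafe.columnpage": "false"}

class ShortIntUnsafePageTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Save the current value, then enable the setting for the whole suite.
        cls._saved = PROPERTIES["enable.unsafe.columnpage"]
        PROPERTIES["enable.unsafe.columnpage"] = "true"

    @classmethod
    def tearDownClass(cls):
        # afterAll analogue: always restore, even if a test above failed.
        PROPERTIES["enable.unsafe.columnpage"] = cls._saved

    def test_runs_with_unsafe_page_enabled(self):
        self.assertEqual(PROPERTIES["enable.unsafe.columnpage"], "true")
```

Restoring inside the test body, as the diff does, skips the reset whenever an earlier assertion in the test throws.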


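The boundary values exercised in the diff above (values up to ±8388607 stay in the first column, ±8388608 and beyond spill into value2) line up with a 3-byte, 24-bit signed encoding, consistent with the PR title's "3 bytes data type support". A quick arithmetic sketch (the symmetric lower bound is inferred from the test data, not from CarbonData documentation):

```python
# 3-byte (24-bit) signed storage: the test data treats +/-(2**23 - 1) as the
# largest magnitudes that still fit the short-int column page.
BITS = 24
MAX_SHORT_INT = 2 ** (BITS - 1) - 1      # 8388607
MIN_SHORT_INT = -MAX_SHORT_INT           # symmetric bound, per the test data

def fits_short_int(v):
    return MIN_SHORT_INT <= v <= MAX_SHORT_INT

print(MAX_SHORT_INT)            # 8388607
print(fits_short_int(8388608))  # False: needs the wider encoding
```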


[GitHub] carbondata issue #1032: [CARBONDATA-1149] Fixed range info overlapping value...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1032
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2660/





[GitHub] carbondata issue #1078: [CARBONDATA-1214]changing the delete syntax as in th...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1078
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/577/





[GitHub] carbondata issue #1078: [CARBONDATA-1214]changing the delete syntax as in th...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1078
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1078: [CARBONDATA-1214]changing the delete syntax a...

2017-06-22 Thread ravikiran23
Github user ravikiran23 closed the pull request at:

https://github.com/apache/carbondata/pull/1078




[jira] [Created] (CARBONDATA-1216) Dataloading is failing with enable.unsafe.columnpage=true

2017-06-22 Thread kumar vishal (JIRA)
kumar vishal created CARBONDATA-1216:


 Summary: Dataloading is failing with enable.unsafe.columnpage=true
 Key: CARBONDATA-1216
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1216
 Project: CarbonData
  Issue Type: Bug
Reporter: kumar vishal
Assignee: Jacky Li


Dataloading is failing with enable.unsafe.columnpage=true



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-1215) Select Query fails for decimal type with enable.unsafe.columnpage=true

2017-06-22 Thread kumar vishal (JIRA)
kumar vishal created CARBONDATA-1215:


 Summary: Select Query fails for decimal type with enable.unsafe.columnpage=true
 Key: CARBONDATA-1215
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1215
 Project: CarbonData
  Issue Type: Bug
Reporter: kumar vishal
Assignee: Jacky Li


Select Query fails for decimal type with enable.unsafe.columnpage=true





[GitHub] carbondata issue #1078: [CARBONDATA-1214]changing the delete syntax as in th...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1078
  

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/576/

Failed Tests: 2
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-core: 2
org.apache.carbondata.core.writer.CarbonFooterWriterTest.testWriteFactMetadata
org.apache.carbondata.core.writer.CarbonFooterWriterTest.testReadFactMetadata





[GitHub] carbondata issue #1078: [CARBONDATA-1214]changing the delete syntax as in th...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1078
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/88/





[GitHub] carbondata issue #1078: [CARBONDATA-1214]changing the delete syntax as in th...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1078
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2659/





[GitHub] carbondata issue #1077: [CARBONDATA-1213] Removed rowCountPercentage check a...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1077
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/575/





[GitHub] carbondata issue #1078: [CARBONDATA-1214]changing the delete syntax as in th...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1078
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1078: [CARBONDATA-1214]changing the delete syntax a...

2017-06-22 Thread ravikiran23
GitHub user ravikiran23 opened a pull request:

https://github.com/apache/carbondata/pull/1078

[CARBONDATA-1214]changing the delete syntax as in the hive.

Problem: The syntax of the CarbonData delete by id and date is not compatible with Hive.

Solution: Make the delete syntax compatible with Hive.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ravikiran23/incubator-carbondata syntax-change-delete

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1078.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1078


commit d0c271a0e07c6310f7775b9ea8817775da1607a4
Author: joobisb 
Date:   2017-06-22T12:48:19Z

changing the delete syntax as in the hive.






[jira] [Created] (CARBONDATA-1214) Change the syntax of the Delete by ID and date as per hive syntax.

2017-06-22 Thread ravikiran (JIRA)
ravikiran created CARBONDATA-1214:
-

 Summary: Change the syntax of the Delete by ID and date as per hive syntax.
 Key: CARBONDATA-1214
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1214
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.1.0
Reporter: ravikiran
Priority: Minor


change the syntax of delete by id and date as per hive syntax.

Ex: delete from table carbon where segment.id in (0)

delete from table ignoremajor where segment.starttime before '2099-07-28 11:00:00'





[GitHub] carbondata issue #1077: [CARBONDATA-1213] Removed rowCountPercentage check a...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1077
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2658/





[GitHub] carbondata issue #1033: spark2/CarbonSQLCLIDriver.scala storePath is not hdf...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1033
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/574/





[GitHub] carbondata issue #1033: spark2/CarbonSQLCLIDriver.scala storePath is not hdf...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1033
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/573/





[jira] [Resolved] (CARBONDATA-1149) Fix issue of mismatch type of partition column when specify partition info and range info overlapping values issue

2017-06-22 Thread Venkata Ramana G (JIRA)

[ https://issues.apache.org/jira/browse/CARBONDATA-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Venkata Ramana G resolved CARBONDATA-1149.
--
   Resolution: Fixed
 Assignee: chenerlu
Fix Version/s: 1.2.0

> Fix issue of mismatch type of partition column when specify partition info 
> and range info overlapping values issue
> --
>
> Key: CARBONDATA-1149
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1149
> Project: CarbonData
>  Issue Type: Bug
>Reporter: chenerlu
>Assignee: chenerlu
>Priority: Minor
> Fix For: 1.2.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] carbondata pull request #1046: [carbondata-1149] Fix issue of mismatch type ...

2017-06-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1046




[GitHub] carbondata issue #1046: [carbondata-1149] Fix issue of mismatch type of part...

2017-06-22 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1046
  
LGTM




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123464378
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/ShowCarbonPartitionsCommand.scala ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.execution.RunnableCommand
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+    tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+    // Column names are based on Hive.
+    AttributeReference("ID", StringType, nullable = false,
+      new MetadataBuilder().putString("comment", "partition id").build())(),
+    AttributeReference("Value", StringType, nullable = true,
+      new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = CarbonEnv.get.carbonMetastore
+      .lookupRelation1(tableIdentifier)(sqlContext).
+      asInstanceOf[CarbonRelation]
+    val carbonTable = relation.tableMeta.carbonTable
+    var partitionInfo = carbonTable.getPartitionInfo(
--- End diff --

throw an exception if the table is not partitioned
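A minimal sketch of that guard, with a stubbed `PartitionInfo` case class standing in for CarbonData's partition metadata (names here are illustrative): fail fast with a clear message when the looked-up table has no partition info, instead of hitting a NullPointerException later.

```scala
// Stub for the partition metadata a table lookup would return.
case class PartitionInfo(columnName: String)

// Guard suggested by the reviewer: reject non-partitioned tables up front.
def requirePartitioned(partitionInfo: PartitionInfo, tableName: String): PartitionInfo = {
  if (partitionInfo == null) {
    // a descriptive error instead of an NPE deeper in the command
    throw new IllegalArgumentException(s"Table $tableName is not a partitioned table")
  }
  partitionInfo
}
```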




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123465927
  
--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/ShowCarbonPartitionsCommand.scala ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hive.CarbonRelation
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+    tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+    // Column names are based on Hive.
+    AttributeReference("ID", StringType, nullable = false,
+      new MetadataBuilder().putString("comment", "partition id").build())(),
+    AttributeReference("Name", StringType, nullable = false,
--- End diff --

We don't need 3 columns.
Hive shows just one column, and I think we should follow the same.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123463537
  
--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/ShowPartitionInfoExample.scala ---
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.spark.sql.SparkSession
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object ShowPartitionInfoExample {
--- End diff --

No need to add a new example. Write one example for "show partition" in 
CarbonPartitionExample




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123466148
  
--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/ShowCarbonPartitionsCommand.scala ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hive.CarbonRelation
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+    tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+    // Column names are based on Hive.
+    AttributeReference("ID", StringType, nullable = false,
+      new MetadataBuilder().putString("comment", "partition id").build())(),
+    AttributeReference("Name", StringType, nullable = false,
+      new MetadataBuilder().putString("comment", "partition name").build())(),
+    AttributeReference("Value", StringType, nullable = true,
+      new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sparkSession: SparkSession): Seq[Row] = {
--- End diff --

This code is almost the same for 1.6 and 2.1. Can you move the common code to CommonUtil?




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123466992
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/ShowCarbonPartitionsCommand.scala ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.execution.RunnableCommand
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+    tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+    // Column names are based on Hive.
+    AttributeReference("ID", StringType, nullable = false,
+      new MetadataBuilder().putString("comment", "partition id").build())(),
+    AttributeReference("Value", StringType, nullable = true,
+      new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = CarbonEnv.get.carbonMetastore
+      .lookupRelation1(tableIdentifier)(sqlContext).
+      asInstanceOf[CarbonRelation]
+    val carbonTable = relation.tableMeta.carbonTable
+    var partitionInfo = carbonTable.getPartitionInfo(
+      carbonTable.getAbsoluteTableIdentifier.getCarbonTableIdentifier.getTableName)
+    var partitionType = partitionInfo.getPartitionType
+    var result = Seq.newBuilder[Row]
+    columnName = partitionInfo.getColumnSchemaList.get(0).getColumnName
+    LOGGER.info("partition column name:" + columnName)
+    partitionType match {
+      case PartitionType.RANGE =>
+        result.+=(RowFactory.create("0", "default"))
+        var id = 1
+        var rangeInfo = partitionInfo.getRangeInfo
+        var size = rangeInfo.size() - 1
+        for (index <- 0 to size) {
+          result.+=(RowFactory.create(id.toString(), "< " + rangeInfo.get(index)))
+          id += 1
+        }
+      case PartitionType.RANGE_INTERVAL =>
+        result.+=(RowFactory.create("", ""))
+      case PartitionType.LIST =>
+        result.+=(RowFactory.create("0", "default"))
--- End diff --

for list:
column_name = 1,3,5,7,10
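A minimal sketch of the one-line LIST-partition display being suggested, rendering a partition as "column_name = v1,v2,..." (the helper name and signature are illustrative, not CarbonData's actual code):

```scala
// Render one LIST partition's values in the reviewer's suggested format.
def formatListPartition(columnName: String, values: Seq[String]): String =
  s"$columnName = ${values.mkString(",")}"
```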




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123463416
  
--- Diff: examples/spark/src/main/scala/org/apache/carbondata/examples/ShowPartitionInfoExample.scala ---
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import scala.collection.mutable.LinkedHashMap
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.examples.util.ExampleUtils
+
+object ShowPartitionInfoExample {
--- End diff --

No need to add a new example. Write one example for "show partition" in 
CarbonPartitionExample




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123465686
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonAnalysisRules.scala ---
@@ -170,6 +171,7 @@ object CarbonIUDAnalysisRule extends Rule[LogicalPlan] {
     logicalplan transform {
       case UpdateTable(t, cols, sel, where) => processUpdateQuery(t, cols, sel, where)
       case DeleteRecords(statement, table) => processDeleteRecordsQuery(statement, table)
+      case ShowPartitions(t) => ShowCarbonPartitionsCommand(t)
--- End diff --

This should not be here; handle this in DDLStrategy.
Spark2 already parses the "show partition" command for us, so we can match the same in DDLStrategy.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123466861
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/ShowCarbonPartitionsCommand.scala ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.execution.RunnableCommand
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+    tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+    // Column names are based on Hive.
+    AttributeReference("ID", StringType, nullable = false,
+      new MetadataBuilder().putString("comment", "partition id").build())(),
+    AttributeReference("Value", StringType, nullable = true,
+      new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = CarbonEnv.get.carbonMetastore
+      .lookupRelation1(tableIdentifier)(sqlContext).
+      asInstanceOf[CarbonRelation]
+    val carbonTable = relation.tableMeta.carbonTable
+    var partitionInfo = carbonTable.getPartitionInfo(
+      carbonTable.getAbsoluteTableIdentifier.getCarbonTableIdentifier.getTableName)
+    var partitionType = partitionInfo.getPartitionType
+    var result = Seq.newBuilder[Row]
+    columnName = partitionInfo.getColumnSchemaList.get(0).getColumnName
+    LOGGER.info("partition column name:" + columnName)
+    partitionType match {
+      case PartitionType.RANGE =>
+        result.+=(RowFactory.create("0", "default"))
+        var id = 1
+        var rangeInfo = partitionInfo.getRangeInfo
+        var size = rangeInfo.size() - 1
+        for (index <- 0 to size) {
+          result.+=(RowFactory.create(id.toString(), "< " + rangeInfo.get(index)))
--- End diff --

can we have the partitions be shown like this for range:
1<= column_name < 5
5<= column_name < 10
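
A minimal Scala sketch of the bound rendering proposed above (`columnName` and `rangeInfo` are stand-ins for the values read from `PartitionInfo`; this is not the PR's actual code):

```scala
object RangePartitionFormat {
  // Renders "lower <= column < upper" labels from the sorted range bounds,
  // as suggested in the review; the first partition has no lower bound.
  def formatRanges(columnName: String, rangeInfo: Seq[String]): Seq[String] =
    rangeInfo.indices.map { i =>
      if (i == 0) s"$columnName < ${rangeInfo(i)}"
      else s"${rangeInfo(i - 1)} <= $columnName < ${rangeInfo(i)}"
    }
}
```

For example, `formatRanges("col", Seq("5", "10"))` yields `Seq("col < 5", "5 <= col < 10")`.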



---


[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123467182
  
--- Diff: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/ShowCarbonPartitionsCommand.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, 
AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.execution.RunnableCommand
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = 
LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+// Column names are based on Hive.
+AttributeReference("ID", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition 
id").build())(),
+AttributeReference("Value", StringType, nullable = true,
+  new MetadataBuilder().putString("comment", "partition 
value").build())()
+  )
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = CarbonEnv.get.carbonMetastore
+  .lookupRelation1(tableIdentifier)(sqlContext).
+  asInstanceOf[CarbonRelation]
+val carbonTable = relation.tableMeta.carbonTable
+var partitionInfo = carbonTable.getPartitionInfo(
+  
carbonTable.getAbsoluteTableIdentifier.getCarbonTableIdentifier.getTableName)
+var partitionType = partitionInfo.getPartitionType
+var result = Seq.newBuilder[Row]
+columnName = partitionInfo.getColumnSchemaList.get(0).getColumnName
+LOGGER.info("partition column name:" + columnName)
+partitionType match {
+  case PartitionType.RANGE =>
+result.+=(RowFactory.create("0", "default"))
+var id = 1
+var rangeInfo = partitionInfo.getRangeInfo
+var size = rangeInfo.size() - 1
+for (index <- 0 to size) {
+  result.+=(RowFactory.create(id.toString(), "< " + 
rangeInfo.get(index)))
+  id += 1
+}
+  case PartitionType.RANGE_INTERVAL =>
+result.+=(RowFactory.create("", ""))
+  case PartitionType.LIST =>
+result.+=(RowFactory.create("0", "default"))
+var id = 1
+var listInfo = partitionInfo.getListInfo
+var size = listInfo.size() - 1
+for (index <- 0 to size) {
+  var listStr = ""
+  listInfo.get(index).toArray().foreach { x =>
+if (listStr.isEmpty()) {
+  listStr = x.toString()
+} else {
+  listStr += ", " + x.toString()
+}
+  }
+  result.+=(RowFactory.create(id.toString(), listStr))
+  id += 1
+}
+  case PartitionType.HASH =>
+var hashNumber = partitionInfo.getNumPartitions
+result.+=(RowFactory.create("HASH PARTITION", 
hashNumber.toString()))
--- End diff --

for hash:
column_name = HASH_NUMBER(num)
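
A one-line sketch of the hash display format suggested here (`columnName` and `numPartitions` are hypothetical stand-ins for the values from `PartitionInfo`):

```scala
object HashPartitionFormat {
  // Renders the "column_name = HASH_NUMBER(num)" label the reviewer proposes.
  def formatHash(columnName: String, numPartitions: Int): String =
    s"$columnName = HASH_NUMBER($numPartitions)"
}
```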


---


[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123466013
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonAnalysisRules.scala
 ---
@@ -203,6 +205,7 @@ case class CarbonIUDAnalysisRule(sparkSession: 
SparkSession) extends Rule[Logica
 logicalplan transform {
   case UpdateTable(t, cols, sel, where) => processUpdateQuery(t, cols, 
sel, where)
   case DeleteRecords(statement, table) => 
processDeleteRecordsQuery(statement, table)
+  case ShowPartitionsCommand(t, cols) => ShowCarbonPartitionsCommand(t)
--- End diff --

Not to be handled here


---


[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-22 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r123463948
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
 ---
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.partition
+
+import java.sql.Timestamp
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.common.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+class TestShowPartition  extends QueryTest with BeforeAndAfterAll {
+  override def beforeAll = {
+
+CarbonProperties.getInstance()
+  .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, 
"dd-MM-")
+
+  }
+
+  test("show partition table: hash table") {
+sql(
+  """
+| CREATE TABLE hashTable (empname String, designation String, doj 
Timestamp,
--- End diff --

1. Write the create statement in beforeAll()
2. 1 or 2 columns are enough
3. no need to load data
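
A hedged sketch of the test layout suggested above, in plain Scala (the real suite extends QueryTest with BeforeAndAfterAll; `sql` here is a stub that only records statements, and the CREATE syntax is illustrative):

```scala
object ShowPartitionTestSketch {
  // Stub standing in for QueryTest.sql: records statements instead of running Spark.
  val executed = scala.collection.mutable.ListBuffer[String]()
  def sql(stmt: String): Unit = executed += stmt

  // 1. The CREATE statement lives in the shared setup (beforeAll in the real suite).
  // 2. One or two columns are enough for a SHOW PARTITIONS test.
  def beforeAll(): Unit =
    sql("CREATE TABLE hashTable (empname STRING) PARTITIONED BY (empno INT) " +
        "STORED BY 'carbondata' " +
        "TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='3')")

  // 3. No data load: SHOW PARTITIONS only reads table metadata.
  def testShowHashPartitions(): Unit = sql("SHOW PARTITIONS hashTable")
}
```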


---


[GitHub] carbondata pull request #1070: [CARBONDATA-1204] Fixed issue of more records...

2017-06-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1070


---


[GitHub] carbondata issue #1077: [CARBONDATA-1213] Removed rowCountPercentage check a...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1077
  

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/572/
Failed Tests: 1
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 1
- org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.test for duplicate column name in the Fileheader options in load command



---


[GitHub] carbondata issue #1077: [CARBONDATA-1213] Removed rowCountPercentage check a...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1077
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/86/



---


[GitHub] carbondata issue #1077: [CARBONDATA-1213] Removed rowCountPercentage check a...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1077
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2657/



---


[jira] [Created] (CARBONDATA-1213) Removed rowCountPercentage check and fixed IUD data load issue

2017-06-22 Thread Manish Gupta (JIRA)
Manish Gupta created CARBONDATA-1213:


 Summary: Removed rowCountPercentage check and fixed IUD data load 
issue
 Key: CARBONDATA-1213
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1213
 Project: CarbonData
  Issue Type: Bug
Reporter: Manish Gupta
Assignee: Manish Gupta
 Fix For: 1.2.0


Problems:
1. The row count percentage check is not required alongside the high cardinality threshold check.
2. IUD returns incorrect results when a high cardinality column is updated.

Analysis:
1. Even when a column is identified as a high cardinality column, it is not converted to a no-dictionary column because of an additional check on rowCountPercentage, whose default value is 80%. As a result, an identified high cardinality column is still treated as a dictionary column if its cardinality is below 80% of the total row count, which can still lead to executor-lost failures under memory constraints.
2. The RLE flag on a column is not being set correctly: due to flawed code design, whether RLE applies to a column is decided in a different part of the code from the one that actually applies it. Because of this, the footer is filled with incorrect RLE information and the query fails.
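
The first problem can be sketched as follows (threshold values and names are illustrative, not CarbonData's actual configuration):

```scala
object DictionaryDecision {
  // Illustrative thresholds, not CarbonData's real configuration values.
  val highCardThreshold = 1000000L
  val rowCountPercentage = 0.8

  // Old behaviour: the extra rowCountPercentage gate could keep a
  // high cardinality column as a dictionary column.
  def isNoDictionaryOld(cardinality: Long, rowCount: Long): Boolean =
    cardinality > highCardThreshold && cardinality > rowCount * rowCountPercentage

  // Fixed behaviour: the cardinality threshold alone decides.
  def isNoDictionaryNew(cardinality: Long): Boolean =
    cardinality > highCardThreshold
}
```

For instance, a column with 2 million distinct values in a 10-million-row table exceeds the high cardinality threshold but fails the 80% gate, so the old check would keep it as a dictionary column.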



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1077: [CARBONDATA-1213] Removed rowCountPercentage check a...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1077
  
Can one of the admins verify this patch?


---


[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1070
  
LGTM


---


[GitHub] carbondata issue #1076: [WIP] Implement range interval partition

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1076
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/85/



---


[GitHub] carbondata issue #1076: [WIP] Implement range interval partition

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1076
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2656/



---


[GitHub] carbondata pull request #1070: [CARBONDATA-1204] Fixed issue of more records...

2017-06-22 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1070#discussion_r123459408
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/InsertIntoCarbonTableTestCase.scala
 ---
@@ -196,19 +196,19 @@ class InsertIntoCarbonTableTestCase extends QueryTest 
with BeforeAndAfterAll {
  )  
  
CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
 timeStampPropOrig)
   }
-
-  test("insert into carbon table from carbon table union query") {
-sql("drop table if exists loadtable")
-sql("drop table if exists insertTable")
-sql("create table loadtable (imei string,deviceInformationId int,MAC 
string,deviceColor string,device_backColor string,modelId string,marketName 
string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series 
string,productionDate timestamp,bomCode string,internalModels string, 
deliveryTime string, channelsId string, channelsName string , deliveryAreaId 
string, deliveryCountry string, deliveryProvince string, deliveryCity 
string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, 
ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, 
ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet 
string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion 
string, Active_operaSysVersion string, Active_BacVerNumber string, 
Active_BacFlashVer string, Active_webUIVersion string, Active_webUITypeCarrVer 
string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
Active_phonePADPartitionedVer
 sions string, Latest_YEAR int, Latest_MONTH int, Latest_DAY Decimal(30,10), 
Latest_HOUR string, Latest_areaId string, Latest_country string, 
Latest_province string, Latest_city string, Latest_district string, 
Latest_street string, Latest_releaseId string, Latest_EMUIVersion string, 
Latest_operaSysVersion string, Latest_BacVerNumber string, Latest_BacFlashVer 
string, Latest_webUIVersion string, Latest_webUITypeCarrVer string, 
Latest_webTypeDataVerNumber string, Latest_operatorsVersion string, 
Latest_phonePADPartitionedVersions string, Latest_operatorId string, 
gamePointDescription string,gamePointId double,contractNumber BigInt) STORED BY 
'org.apache.carbondata.format'")
-sql("LOAD DATA INPATH '" + resourcesPath + "/100_olap.csv' INTO table 
loadtable options ('DELIMITER'=',', 'QUOTECHAR'='\', 
'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVer
 
Number,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointDescription,gamePointId,contractNumber')")
-sql("create table insertTable (imei string,deviceInformationId int,MAC 
string,deviceColor string,device_backColor string,modelId string,marketName 
string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series 
string,productionDate timestamp,bomCode string,internalModels string, 
deliveryTime string, channelsId string, channelsName string , deliveryAreaId 
string, deliveryCountry string, deliveryProvince string, deliveryCity 
string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, 
ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, 
ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet 
string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion 
string, Active_operaSysVersion string, Active_BacVerNumber string, 
Active_BacFlashVer string, Active_webUIVersion string, Active_webUITypeCarrVer 
string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
Active_phonePADPartitionedV
 ersions string, Latest_YEAR int, Latest_MONTH int, Latest_DAY Decimal(30,10), 
Latest_HOUR string, Latest_areaId string, Latest_country string, 
Latest_province string, Latest_city string, Latest_district string, 
Latest_street string, Latest_releaseId string, Latest_EMUIVersion string, 
Latest_operaSysVersion string, Latest_BacVerNumber 

[GitHub] carbondata pull request #1076: [WIP] Implement range interval partition

2017-06-22 Thread chenerlu
GitHub user chenerlu opened a pull request:

https://github.com/apache/carbondata/pull/1076

[WIP] Implement range interval partition 

This PR tries to implement range interval partitioning; it is still a work in progress.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chenerlu/incubator-carbondata RangeInterval

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1076.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1076


commit 016377b614a29b65e67a5a965ac2ebaedb86dfe6
Author: chenerlu 
Date:   2017-06-22T09:13:05Z

Step 1 of implement range interval partition type




---


[GitHub] carbondata issue #1040: [CARBONDATA-1171] Added support for show partitions ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1040
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2655/



---


[GitHub] carbondata issue #1040: [CARBONDATA-1171] Added support for show partitions ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1040
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/84/



---


[GitHub] carbondata issue #1040: [CARBONDATA-1171] Added support for show partitions ...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1040
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/570/



---


[GitHub] carbondata issue #1053: [CARBONDATA-1188] fixed codec for UpscaleFloatingCod...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1053
  

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/569/
Failed Tests: 1
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 1
- org.apache.carbondata.spark.testsuite.dataretention.DataRetentionConcurrencyTestCase.DataRetention_Concurrency_load_date



---


[GitHub] carbondata issue #1053: [CARBONDATA-1188] fixed codec for UpscaleFloatingCod...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1053
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/83/



---


[GitHub] carbondata issue #1053: [CARBONDATA-1188] fixed codec for UpscaleFloatingCod...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1053
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2654/



---


[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1070
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/568/



---


[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1070
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2653/



---


[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1070
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/82/



---


[GitHub] carbondata pull request #1066: [CARBONDATA-1197] Update related docs which s...

2017-06-22 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1066#discussion_r123440083
  
--- Diff: docs/installation-guide.md ---
@@ -150,7 +150,7 @@ $SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR 

 
 | Parameter | Description | Example |
 
|-|---|---|
-| CARBON_ASSEMBLY_JAR | CarbonData assembly jar name present in the 
`$SPARK_HOME/carbonlib/` folder. | 
carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar |
+| CARBON_ASSEMBLY_JAR | CarbonData assembly jar name present in the 
`$SPARK_HOME/carbonlib/` folder. | 
carbondata_2.10-1.2.0-SNAPSHOT-shade-hadoop2.7.2.jar |
--- End diff --

OK, I have updated this PR as you suggested. @chenliang613 


---


[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1070
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/81/



---


[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1070
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2652/



---


[GitHub] carbondata issue #1066: [CARBONDATA-1197] Update related docs which still us...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1066
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2651/



---


[GitHub] carbondata issue #1066: [CARBONDATA-1197] Update related docs which still us...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1066
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/565/





[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1070
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2650/





[GitHub] carbondata issue #791: [CARBONDATA-920] Updated useful-tips-on-carbondata.md

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/791
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/563/

Failed Tests: 2
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-core: 1
  org.apache.carbondata.core.dictionary.client.DictionaryClientTest.testClient
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 1
  org.apache.carbondata.spark.testsuite.dataload.TestBatchSortDataLoad.test batch sort load by passing option and compaction





[GitHub] carbondata issue #892: [CARBONDATA - 1036] - Added Implementation for Flink ...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/892
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/562/

Failed Tests: 2
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 2
  org.apache.carbondata.spark.testsuite.dataload.TestBatchSortDataLoad.test batch sort load by passing option and compaction
  org.apache.carbondata.spark.testsuite.dataload.TestBatchSortDataLoad.test batch sort load by passing option in one load and with out option in other load and then do compaction





[GitHub] carbondata pull request #1070: [CARBONDATA-1204] Fixed issue of more records...

2017-06-22 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1070#discussion_r123434284
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/scan/scanner/AbstractBlockletScanner.java
 ---
@@ -46,6 +48,10 @@
 
   private AbstractScannedResult emptyResult;
 
+  private static int NUMBER_OF_ROWS_PER_PAGE = 
Integer.parseInt(CarbonProperties.getInstance()
--- End diff --

Actually it is not supposed to be configurable, but since other places read it 
from carbon properties, I am reading it the same way here. I will make it 
final.




[GitHub] carbondata issue #809: [WIP] Configured prefetch in query scanner

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/809
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/561/

Failed Tests: 2
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 2
  org.apache.carbondata.integration.spark.testsuite.dataload.TestLoadDataGeneral.test data loading into table with Single Pass
  org.apache.carbondata.spark.testsuite.dataretention.DataRetentionConcurrencyTestCase.DataRetention_Concurrency_load_date





[GitHub] carbondata issue #249: [CARBONDATA-329] constant final class changed to inte...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/249
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/564/





[GitHub] carbondata issue #892: [CARBONDATA - 1036] - Added Implementation for Flink ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/892
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #943: [CARBONDATA-1086]Added documentation for BATCH SORT S...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/943
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1014: [CARBONDATA-1150]updated configuration-parameters.md...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1014
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1064: [CARBONDATA-<1173>] Stream ingestion - write path fr...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1064
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1068: [CARBONDATA-1195] Closes table tag in configuration-...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1068
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause In Prest...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1062
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #809: [WIP] Configured prefetch in query scanner

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/809
  
test




[GitHub] carbondata pull request #1070: [CARBONDATA-1204] Fixed issue of more records...

2017-06-22 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1070#discussion_r123427655
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/scan/scanner/AbstractBlockletScanner.java
 ---
@@ -46,6 +48,10 @@
 
   private AbstractScannedResult emptyResult;
 
+  private static int NUMBER_OF_ROWS_PER_PAGE = 
Integer.parseInt(CarbonProperties.getInstance()
--- End diff --

If it is declared as a plain `static` field (not `final`), it can be changed 
dynamically after the system is initialized.
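The point under discussion can be shown with a minimal, self-contained sketch. This is not CarbonData source; the class name and property key are hypothetical, and `java.util.Properties` stands in for the actual `CarbonProperties` API. It contrasts a plain `static` field read from properties, which any code path may reassign after startup, with a `static final` field that is fixed at class-initialization time:

```java
import java.util.Properties;

public class PageSizeExample {
    static final Properties PROPS = new Properties();
    static {
        PROPS.setProperty("carbon.rows.per.page", "32000"); // hypothetical key
    }

    // Fixed once when the class is initialized; cannot be reassigned later.
    static final int ROWS_PER_PAGE_FINAL =
        Integer.parseInt(PROPS.getProperty("carbon.rows.per.page", "32000"));

    // Mutable: any code path may overwrite it after the system starts.
    static int rowsPerPageMutable =
        Integer.parseInt(PROPS.getProperty("carbon.rows.per.page", "32000"));

    public static void main(String[] args) {
        rowsPerPageMutable = 1;     // compiles: the value can silently drift
        // ROWS_PER_PAGE_FINAL = 1; // would be a compile-time error
        System.out.println(ROWS_PER_PAGE_FINAL + " " + rowsPerPageMutable);
    }
}
```

Running `main` prints `32000 1`: the `final` constant keeps its initialization-time value while the mutable field has drifted, which is why making the field `final` closes the hole the reviewers describe.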




[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1070
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/560/





[GitHub] carbondata issue #1070: [CARBONDATA-1204] Fixed issue of more records after ...

2017-06-22 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1070
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2648/


