[GitHub] carbondata pull request #3034: [CARBONDATA-3126]Correct some spell error in ...

2018-12-28 Thread tisonkong
Github user tisonkong commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3034#discussion_r244465150
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java ---
@@ -84,7 +84,7 @@
   private static final CarbonProperties CARBONPROPERTIESINSTANCE = new 
CarbonProperties();
 
   /**
-   * porpeties .
+   * porpeties
--- End diff --

Right, this spelling error has been corrected.


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10330/



---


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2282/



---


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10331/



---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2281/



---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2076/



---


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2077/



---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
@runzhliu  please correct the PR title.


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2280/



---


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread tisonkong
Github user tisonkong commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
> LGTM.
> One small issue: for the PR title, please add one "blank" after 
[CARBONDATA-3126]

No problem. I see some have a blank and some do not; I will add a blank after 
the brackets next time.


---


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
LGTM.
One small issue: for the PR title, please add one "blank" after 
[CARBONDATA-3126] 



---


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
add to whitelist


---


[GitHub] carbondata issue #3030: [HOTFIX] Optimize the code style in csdk/sdk markdow...

2018-12-28 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/3030
  
LGTM


---


[GitHub] carbondata pull request #3036: [CARBONDATA-3208]Remove unused parameters and...

2018-12-28 Thread runzhliu
Github user runzhliu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3036#discussion_r244462598
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonGlobalDictionaryRDD.scala
 ---
@@ -483,12 +483,12 @@ class CarbonGlobalDictionaryGenerateRDD(
 }
 
 /**
- * Set column dictionry patition format
--- End diff --

OK


---


[GitHub] carbondata pull request #3036: [CARBONDATA-3208]Remove unused parameters and...

2018-12-28 Thread runzhliu
Github user runzhliu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3036#discussion_r244462534
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/PartitionFactory.scala
 ---
@@ -32,7 +32,7 @@ object PartitionFactory {
   case PartitionType.LIST => new ListPartitioner(partitionInfo)
   case PartitionType.RANGE => new RangePartitioner(partitionInfo)
   case partitionType =>
-throw new CarbonDataLoadingException(s"Unsupport partition type: 
$partitionType")
+throw new CarbonDataLoadingException(s"Unsupported partition type: 
$partitionType")
--- End diff --

Sure, I got another three.


---


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
add to whitelist


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10329/



---


[GitHub] carbondata pull request #3036: [CARBONDATA-3208]Remove unused parameters and...

2018-12-28 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3036#discussion_r244461434
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala
 ---
@@ -387,7 +382,7 @@ object GlobalDictionaryUtil {
* @param table carbon table identifier
* @param colName   user specified  column name for predefined dict
* @param colDictPath   column dictionary file path
-   * @param parentDimName parent dimenion for complex type
+   * @param parentDimName parent dimension for complex type
--- End diff --

Have you checked it in the whole project?


---


[GitHub] carbondata pull request #3036: [CARBONDATA-3208]Remove unused parameters and...

2018-12-28 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3036#discussion_r244461274
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonGlobalDictionaryRDD.scala
 ---
@@ -483,12 +483,12 @@ class CarbonGlobalDictionaryGenerateRDD(
 }
 
 /**
- * Set column dictionry patition format
--- End diff --

Have you checked it in the whole project? "patition" occurs 25+ times.


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2075/



---


[GitHub] carbondata pull request #3036: [CARBONDATA-3208]Remove unused parameters and...

2018-12-28 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3036#discussion_r244461155
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/PartitionFactory.scala
 ---
@@ -32,7 +32,7 @@ object PartitionFactory {
   case PartitionType.LIST => new ListPartitioner(partitionInfo)
   case PartitionType.RANGE => new RangePartitioner(partitionInfo)
   case partitionType =>
-throw new CarbonDataLoadingException(s"Unsupport partition type: 
$partitionType")
+throw new CarbonDataLoadingException(s"Unsupported partition type: 
$partitionType")
--- End diff --

Can you check it in the whole project? There are another two.


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread qiuchenjian
Github user qiuchenjian commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
@xubo245 OK, I'll try.


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
add to whitelist


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
@qiuchenjian I tried it before, but there are some limitations and it is not 
easy to support. If you have a better way to support it, you can raise a PR to 
implement it.


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread qiuchenjian
Github user qiuchenjian commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
@xuchuanyin @xubo245 @QiangCai @jackylk Can we enable a checkstyle rule to avoid 
unused imports? I think it's necessary to keep good code style and to reduce a 
class's dependency on the number of jar packages. The rule may be:
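
For illustration (not from the original thread), this is the kind of dead import 
such a rule would flag, assuming something like Checkstyle's UnusedImports check; 
the class below is a hypothetical example, not CarbonData code:

```java
// Hypothetical snippet: java.util.List is imported but never referenced,
// so an unused-imports check would report it during the build.
import java.util.List;      // unused -> would be flagged
import java.util.ArrayList;

public class LoadOptionsExample {
  private final ArrayList<String> options = new ArrayList<>();

  public void add(String option) {
    options.add(option);
  }
}
```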


---


[GitHub] carbondata issue #3035: [CARBONDATA-3216] Fix some bugs in CSDK

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3035
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10328/



---


[GitHub] carbondata issue #3035: [CARBONDATA-3216] Fix some bugs in CSDK

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3035
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2279/



---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Can one of the admins verify this patch?


---


[GitHub] carbondata issue #3036: [CARBONDATA-3208]Remove unused parameters and import...

2018-12-28 Thread runzhliu
Github user runzhliu commented on the issue:

https://github.com/apache/carbondata/pull/3036
  
Hi @xubo245, can you please take a look?


---


[GitHub] carbondata pull request #3036: [CARBONDATA-3208]Remove unused parameters and...

2018-12-28 Thread runzhliu
GitHub user runzhliu opened a pull request:

https://github.com/apache/carbondata/pull/3036

[CARBONDATA-3208]Remove unused parameters and imports from code

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [x] Any interfaces changed?
 
 - [x] Any backward compatibility impacted?
 
 - [x] Document update required?

 - [x] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [x] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/runzhliu/carbondata runzhliu

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/3036.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3036


commit 4213886534fdf68a143060776afa47d269fac782
Author: Oscar 
Date:   2018-12-27T23:24:06Z

[CARBONDATA-3208]Remove unused parameters and imports from code




---


[GitHub] carbondata pull request #3030: [HOTFIX] Optimize the code style in csdk/sdk ...

2018-12-28 Thread lamber-ken
Github user lamber-ken commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3030#discussion_r29499
  
--- Diff: docs/csdk-guide.md ---
@@ -43,114 +43,116 @@ C++ SDK support read batch row. User can set batch by 
using withBatch(int batch)
 ## API List
 ### CarbonReader
 ```
-/**
- * create a CarbonReaderBuilder object for building carbonReader,
- * CarbonReaderBuilder object  can configure different parameter
- *
- * @param env JNIEnv
- * @param path data store path
- * @param tableName table name
- * @return CarbonReaderBuilder object
- */
-jobject builder(JNIEnv *env, char *path, char *tableName);
+/**
+ * create a CarbonReaderBuilder object for building carbonReader,
+ * CarbonReaderBuilder object  can configure different parameter
+ *
+ * @param env JNIEnv
+ * @param path data store path
+ * @param tableName table name
+ * @return CarbonReaderBuilder object
+ */
+jobject builder(JNIEnv *env, char *path, char *tableName);
 ```
 
 ```
-/**
- * create a CarbonReaderBuilder object for building carbonReader,
- * CarbonReaderBuilder object  can configure different parameter
- *
- * @param env JNIEnv
- * @param path data store path
- * */
-void builder(JNIEnv *env, char *path);
+/**
+ * create a CarbonReaderBuilder object for building carbonReader,
+ * CarbonReaderBuilder object  can configure different parameter
+ *
+ * @param env JNIEnv
+ * @param path data store path
+ * 
+ */
+void builder(JNIEnv *env, char *path);
 ```
 
 ```
-/**
- * Configure the projection column names of carbon reader
- *
- * @param argc argument counter
- * @param argv argument vector
- * @return CarbonReaderBuilder object
- */
-jobject projection(int argc, char *argv[]);
+/**
+ * Configure the projection column names of carbon reader
+ *
+ * @param argc argument counter
+ * @param argv argument vector
+ * @return CarbonReaderBuilder object
+ */
+jobject projection(int argc, char *argv[]);
 ```
 
 ```
-/**
- *  build carbon reader with argument vector
- *  it support multiple parameter
- *  like: key=value
- *  for example: fs.s3a.access.key=,  is user's access key 
value
- *
- * @param argc argument counter
- * @param argv argument vector
- * @return CarbonReaderBuilder object
- **/
-jobject withHadoopConf(int argc, char *argv[]);
+/**
+ * build carbon reader with argument vector
+ * it support multiple parameter
+ * like: key=value
+ * for example: fs.s3a.access.key=,  is user's access key value
+ *
+ * @param argc argument counter
+ * @param argv argument vector
+ * @return CarbonReaderBuilder object
+ *
+ */
+jobject withHadoopConf(int argc, char *argv[]);
 ```
 
 ```
-   /**
- * Sets the batch size of records to read
- *
- * @param batch batch size
- * @return CarbonReaderBuilder object
- */
-void withBatch(int batch);
+/**
+ * Sets the batch size of records to read
+ *
+ * @param batch batch size
+ * @return CarbonReaderBuilder object
+ */
+void withBatch(int batch);
 ```
 
 ```
-/**
- * Configure Row Record Reader for reading.
- */
-void withRowRecordReader();
+/**
+ * Configure Row Record Reader for reading.
+ */
+void withRowRecordReader();
 ```
 
 ```
-/**
- * build carbonReader object for reading data
- * it support read data from load disk
- *
- * @return carbonReader object
- */
-jobject build();
+/**
+ * build carbonReader object for reading data
+ * it support read data from load disk
+ *
+ * @return carbonReader object
+ */
+jobject build();
 ```
 
 ```
-/**
- * Whether it has next row data
- *
- * @return boolean value, if it has next row, return true. if it 
hasn't next row, return false.
- */
-jboolean hasNext();
+/**
+ * Whether it has next row data
+ *
+ * @return boolean value, if it has next row, return true. if it hasn't 
next row, return false.
+ */
+jboolean hasNext();
 ```
 
 ```
-/**
- * read next carbonRow from data
- * @return carbonRow object of one row
- */
- jobject readNextRow();
+/**
+ * read next carbonRow from data
+ * @return carbonRow object of one row
+ */
+jobject readNextRow();
 ```
 
 ```
-/**
- * read Next Batch 

[GitHub] carbondata issue #3035: [CARBONDATA-3216] Fix some bugs in CSDK

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3035
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2074/



---


[jira] [Resolved] (CARBONDATA-3194) Support Hive Metastore in Presto CarbonData.

2018-12-28 Thread Jacky Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacky Li resolved CARBONDATA-3194.
--
   Resolution: Fixed
Fix Version/s: 1.5.2

> Support Hive Metastore in Presto CarbonData.
> 
>
> Key: CARBONDATA-3194
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3194
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Ravindra Pesala
>Priority: Major
> Fix For: 1.5.2
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> The current Carbon Presto integration added a new Presto connector that takes 
> the carbon store folder and lists the databases and tables from the folders. 
> This implementation has several issues: 
> 1. DB and table folders always need to be in a specific order, and the folder 
> names should always match the DB name and table name. 
> 2. A table created in Presto cannot be reflected directly in 
> other execution engines like Spark. 
> 3. A DB with a location and a table with a location cannot work. 
> 4. There will not be any access control on tables. 
> 5. There is no interoperability between Hive tables such as ORC or Parquet and 
> carbon; for example, joining a Hive table with a Carbon table is not possible. 
> To overcome the above limitations we can support the Hive Metastore in Presto 
> Carbon. Basically, instead of creating a new Presto connector for Carbon, we 
> can extend the HiveConnector, overriding it and adding a new 
> CarbonPageSourceFactory for reading the data and a FileWriterFactory for 
> writing the data. Carbon then becomes one of the Hive-supported formats 
> for Presto, so whatever tables are added in Spark are reflected 
> immediately in Carbon, and the limitations mentioned above are 
> solved with this type of implementation. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-3216) There are some bugs in CSDK

2018-12-28 Thread xubo245 (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xubo245 updated CARBONDATA-3216:

Description: 
There are some bugs in CSDK:
 1. enableLocalDictionary can't be set to false
code:

{code:java}
   writer.enableLocalDictionary(false);
{code}

exception:
{code:java}
libc++abi.dylib: terminating with uncaught exception of type 
std::runtime_error: enableLocalDictionary parameter can't be NULL.
{code}

2.

  was:
There are some bugs in CSDK:
 1. enableLocalDictionary can't be set to false


> There are some bugs in CSDK
> ---
>
> Key: CARBONDATA-3216
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3216
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.5.1
>Reporter: xubo245
>Assignee: xubo245
>Priority: Major
> Fix For: 1.5.2
>
>
> There are some bugs in CSDK:
>  1. enableLocalDictionary can't be set to false
> code:
> {code:java}
>writer.enableLocalDictionary(false);
> {code}
> exception:
> {code:java}
> libc++abi.dylib: terminating with uncaught exception of type 
> std::runtime_error: enableLocalDictionary parameter can't be NULL.
> {code}
> 2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #3035: [CARBONDATA-3216] Fix some bugs in CSDK

2018-12-28 Thread xubo245
GitHub user xubo245 opened a pull request:

https://github.com/apache/carbondata/pull/3035

[CARBONDATA-3216] Fix some bugs in CSDK

1. enableLocalDictionary can't be set to false

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xubo245/carbondata 
CARBONDATA-3216_FixBugsOfCSDK

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/3035.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3035


commit ca5999c6e629571d6fede92b1101e8c7cee762c7
Author: xubo245 
Date:   2018-12-29T03:34:41Z

[CARBONDATA-3216] Fix some bugs in CSDK
1.enableLocalDictionary can' t set false




---


[GitHub] carbondata pull request #3019: [CARBONDATA-3194] Integrating Carbon with Pre...

2018-12-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/3019


---


[GitHub] carbondata issue #3030: [HOTFIX] Optimize the code style in csdk/sdk markdow...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3030
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10327/



---


[GitHub] carbondata issue #3030: [HOTFIX] Optimize the code style in csdk/sdk markdow...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3030
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2278/



---


[jira] [Resolved] (CARBONDATA-3173) Add hive-guide and other guides to the root of the file ReadMe

2018-12-28 Thread beyond (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

beyond resolved CARBONDATA-3173.

Resolution: Fixed

The documents have been merged to master.

> Add hive-guide and other guides to the root of the file ReadMe
> --
>
> Key: CARBONDATA-3173
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3173
> Project: CarbonData
>  Issue Type: Improvement
>  Components: docs
>Affects Versions: NONE
>Reporter: beyond
>Assignee: beyond
>Priority: Minor
>  Labels: documentation
> Fix For: NONE
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> To make the documentation easier to read, we unified the document 
> index into the ReadMe file in the root directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #3034: [CARBONDATA-3126]Correct some spell error in CarbonD...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3034
  
Can one of the admins verify this patch?


---


[GitHub] carbondata pull request #3034: [CARBONDATA-3126]Correct some spell error in ...

2018-12-28 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3034#discussion_r22392
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java ---
@@ -84,7 +84,7 @@
   private static final CarbonProperties CARBONPROPERTIESINSTANCE = new 
CarbonProperties();
 
   /**
-   * porpeties .
+   * porpeties
--- End diff --

"porpeties" should be "properties"; can you describe it in detail?


---


[GitHub] carbondata pull request #3034: [CARBONDATA-3126]Correct some spell error in ...

2018-12-28 Thread tisonkong
GitHub user tisonkong opened a pull request:

https://github.com/apache/carbondata/pull/3034

[CARBONDATA-3126]Correct some spell error in CarbonData

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 Corrected some spelling errors and modified some local variables.
 - [ ] Any backward compatibility impacted?
 No
 - [ ] Document update required?
No
 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tisonkong/carbondata master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/3034.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3034


commit fb986601f43e79c4ba7c6ad0b95e701b4e20b647
Author: tisonkong <360548494@...>
Date:   2018-12-29T02:02:27Z

[CARBONDATA-3126]Correct some spell error in CarbonData




---


[GitHub] carbondata pull request #3033: [CARBONDATA-3215] Optimize the documentation

2018-12-28 Thread qiuchenjian
Github user qiuchenjian commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3033#discussion_r21456
  
--- Diff: docs/datamap-developer-guide.md ---
@@ -3,15 +3,15 @@
 ### Introduction
 DataMap is a data structure that can be used to accelerate certain query 
of the table. Different DataMap can be implemented by developers. 
 Currently, there are two 2 types of DataMap supported:
--- End diff --

```suggestion
Currently, there are two types of DataMap supported:
```


---


[GitHub] carbondata issue #3019: [CARBONDATA-3194] Integrating Carbon with Presto usi...

2018-12-28 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/3019
  
LGTM


---


[jira] [Resolved] (CARBONDATA-2999) support read schema from S3

2018-12-28 Thread xubo245 (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xubo245 resolved CARBONDATA-2999.
-
Resolution: Resolved

> support read schema from S3
> ---
>
> Key: CARBONDATA-2999
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2999
> Project: CarbonData
>  Issue Type: Sub-task
>Affects Versions: 1.5.0
>Reporter: xubo245
>Assignee: xubo245
>Priority: Major
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> support read schema from S3
> Code:
> {code:java}
>   String path = "s3a://sdk/WriterOutput/carbondata5";
> if (args.length > 3) {
> path=args[3];
> }
> Schema schema = CarbonSchemaReader.readSchema(path);
> System.out.println(schema.getFieldsLength());
> {code}
> Exception:
> {code:java}
> WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your 
> platform... using builtin-java classes where applicable
> Exception in thread "main" com.amazonaws.AmazonClientException: Unable to 
> load AWS credentials from any provider in the chain
>   at 
> com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>   at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.<init>(AbstractDFSCarbonFile.java:74)
>   at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.<init>(AbstractDFSCarbonFile.java:66)
>   at 
> org.apache.carbondata.core.datastore.filesystem.HDFSCarbonFile.<init>(HDFSCarbonFile.java:41)
>   at 
> org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.<init>(S3CarbonFile.java:41)
>   at 
> org.apache.carbondata.core.datastore.impl.DefaultFileTypeProvider.getCarbonFile(DefaultFileTypeProvider.java:53)
>   at 
> org.apache.carbondata.core.datastore.impl.FileFactory.getCarbonFile(FileFactory.java:99)
>   at 
> org.apache.carbondata.sdk.file.CarbonSchemaReader.getCarbonFile(CarbonSchemaReader.java:79)
>   at 
> org.apache.carbondata.sdk.file.CarbonSchemaReader.readSchema(CarbonSchemaReader.java:150)
>   at 
> org.apache.carbondata.sdk.file.CarbonSchemaReader.readSchema(CarbonSchemaReader.java:109)
>   at 
> org.apache.carbondata.examples.sdk.SDKS3SchemaReadExample.main(SDKS3SchemaReadExample.java:51)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
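
The AmazonClientException above means the S3A connector could not find any AWS 
credentials. Below is a minimal sketch of supplying them through a Hadoop 
Configuration, assuming (as this issue proposes) a readSchema variant that 
accepts a Configuration; the credential values and the class name are 
placeholders:

{code:java}
import org.apache.hadoop.conf.Configuration;

import org.apache.carbondata.sdk.file.CarbonSchemaReader;
import org.apache.carbondata.sdk.file.Schema;

public class S3SchemaReadSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder credentials; in practice take them from args or the environment.
    Configuration conf = new Configuration();
    conf.set("fs.s3a.access.key", "<your-access-key>");
    conf.set("fs.s3a.secret.key", "<your-secret-key>");
    conf.set("fs.s3a.endpoint", "<your-s3-endpoint>");

    // Assumed overload: readSchema(path, conf). If only readSchema(path) exists,
    // the credentials must reach the default Hadoop configuration instead.
    Schema schema = CarbonSchemaReader.readSchema("s3a://sdk/WriterOutput/carbondata5", conf);
    System.out.println(schema.getFieldsLength());
  }
}
{code}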


[GitHub] carbondata issue #3030: [HOTFIX] Optimize the code style in csdk/sdk markdow...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3030
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2073/



---


[jira] [Created] (CARBONDATA-3216) There are some bugs in CSDK

2018-12-28 Thread xubo245 (JIRA)
xubo245 created CARBONDATA-3216:
---

 Summary: There are some bugs in CSDK
 Key: CARBONDATA-3216
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3216
 Project: CarbonData
  Issue Type: Bug
Affects Versions: 1.5.1
Reporter: xubo245
Assignee: xubo245
 Fix For: 1.5.2


There are some bugs in CSDK:
 1. enableLocalDictionary can't be set to false



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #3030: [HOTFIX] Optimize the code style in csdk/sdk markdow...

2018-12-28 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/3030
  
add to whitelist


---


[GitHub] carbondata pull request #3030: [HOTFIX] Optimize the code style in csdk/sdk ...

2018-12-28 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3030#discussion_r244439663
  
--- Diff: docs/csdk-guide.md ---
@@ -43,114 +43,116 @@ C++ SDK support read batch row. User can set batch by 
using withBatch(int batch)
 ## API List
 ### CarbonReader
 ```
-/**
- * create a CarbonReaderBuilder object for building carbonReader,
- * CarbonReaderBuilder object  can configure different parameter
- *
- * @param env JNIEnv
- * @param path data store path
- * @param tableName table name
- * @return CarbonReaderBuilder object
- */
-jobject builder(JNIEnv *env, char *path, char *tableName);
+/**
+ * create a CarbonReaderBuilder object for building carbonReader,
+ * CarbonReaderBuilder object  can configure different parameter
+ *
+ * @param env JNIEnv
+ * @param path data store path
+ * @param tableName table name
+ * @return CarbonReaderBuilder object
+ */
+jobject builder(JNIEnv *env, char *path, char *tableName);
 ```
 
 ```
-/**
- * create a CarbonReaderBuilder object for building carbonReader,
- * CarbonReaderBuilder object  can configure different parameter
- *
- * @param env JNIEnv
- * @param path data store path
- * */
-void builder(JNIEnv *env, char *path);
+/**
+ * create a CarbonReaderBuilder object for building carbonReader,
+ * CarbonReaderBuilder object  can configure different parameter
+ *
+ * @param env JNIEnv
+ * @param path data store path
+ * 
+ */
+void builder(JNIEnv *env, char *path);
 ```
 
 ```
-/**
- * Configure the projection column names of carbon reader
- *
- * @param argc argument counter
- * @param argv argument vector
- * @return CarbonReaderBuilder object
- */
-jobject projection(int argc, char *argv[]);
+/**
+ * Configure the projection column names of carbon reader
+ *
+ * @param argc argument counter
+ * @param argv argument vector
+ * @return CarbonReaderBuilder object
+ */
+jobject projection(int argc, char *argv[]);
 ```
 
 ```
-/**
- *  build carbon reader with argument vector
- *  it support multiple parameter
- *  like: key=value
- *  for example: fs.s3a.access.key=,  is user's access key 
value
- *
- * @param argc argument counter
- * @param argv argument vector
- * @return CarbonReaderBuilder object
- **/
-jobject withHadoopConf(int argc, char *argv[]);
+/**
+ * build carbon reader with argument vector
+ * it support multiple parameter
+ * like: key=value
+ * for example: fs.s3a.access.key=,  is user's access key value
+ *
+ * @param argc argument counter
+ * @param argv argument vector
+ * @return CarbonReaderBuilder object
+ *
+ */
+jobject withHadoopConf(int argc, char *argv[]);
 ```
 
 ```
-   /**
- * Sets the batch size of records to read
- *
- * @param batch batch size
- * @return CarbonReaderBuilder object
- */
-void withBatch(int batch);
+/**
+ * Sets the batch size of records to read
+ *
+ * @param batch batch size
+ * @return CarbonReaderBuilder object
+ */
+void withBatch(int batch);
 ```
 
 ```
-/**
- * Configure Row Record Reader for reading.
- */
-void withRowRecordReader();
+/**
+ * Configure Row Record Reader for reading.
+ */
+void withRowRecordReader();
 ```
 
 ```
-/**
- * build carbonReader object for reading data
- * it support read data from load disk
- *
- * @return carbonReader object
- */
-jobject build();
+/**
+ * build carbonReader object for reading data
+ * it support read data from load disk
+ *
+ * @return carbonReader object
+ */
+jobject build();
 ```
 
 ```
-/**
- * Whether it has next row data
- *
- * @return boolean value, if it has next row, return true. if it 
hasn't next row, return false.
- */
-jboolean hasNext();
+/**
+ * Whether it has next row data
+ *
+ * @return boolean value, if it has next row, return true. if it hasn't 
next row, return false.
+ */
+jboolean hasNext();
 ```
 
 ```
-/**
- * read next carbonRow from data
- * @return carbonRow object of one row
- */
- jobject readNextRow();
+/**
+ * read next carbonRow from data
+ * @return carbonRow object of one row
+ */
+jobject readNextRow();
 ```
 
 ```
-/**
- * read Next Batch Row

[GitHub] carbondata pull request #3030: [HOTFIX] Optimize the code style in csdk/sdk ...

2018-12-28 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3030#discussion_r244438958
  
--- Diff: docs/csdk-guide.md ---
@@ -172,361 +174,360 @@ release the memory and destroy JVM.
 ## API List
 ### CarbonWriter
 ```
-/**
- * create a CarbonWriterBuilder object for building carbonWriter,
- * CarbonWriterBuilder object  can configure different parameter
- *
- * @param env JNIEnv
- * @return CarbonWriterBuilder object
- */
-void builder(JNIEnv *env);
-```
-
-```
-/**
- * Sets the output path of the writer builder
- *
- * @param path is the absolute path where output files are written
- * This method must be called when building CarbonWriterBuilder
- * @return updated CarbonWriterBuilder
- */
-void outputPath(char *path);
-```
-
-```
-/**
-  * sets the list of columns that needs to be in sorted order
-  *
-  * @param argc argc argument counter, the number of projection column
-  * @param argv argv is a string array of columns that needs to be 
sorted.
-  *  If it is null or by default all dimensions are 
selected for sorting
-  *  If it is empty array, no columns are sorted
-  */
-void sortBy(int argc, char *argv[]);
-```
-
-```
-/**
- * configure the schema with json style schema
- *
- * @param jsonSchema json style schema
- * @return updated CarbonWriterBuilder
- */
-void withCsvInput(char *jsonSchema);
-```
-
-```
-/**
-* Updates the hadoop configuration with the given key value
-*
-* @param key key word
-* @param value value
-* @return CarbonWriterBuilder object
-*/
-void withHadoopConf(char *key, char *value);
-```
-
-```
- /**
- *  To support the table properties for writer
- *
- * @param key properties key
- * @param value properties value
- */
-void withTableProperty(char *key, char *value);
-```
-
-```
-/**
- * To support the load options for C++ sdk writer
- *
- * @param options key,value pair of load options.
- * supported keys values are
- * a. bad_records_logger_enable -- true (write into separate logs), 
false
- * b. bad_records_action -- FAIL, FORCE, IGNORE, REDIRECT
- * c. bad_record_path -- path
- * d. dateformat -- same as JAVA SimpleDateFormat
- * e. timestampformat -- same as JAVA SimpleDateFormat
- * f. complex_delimiter_level_1 -- value to Split the complexTypeData
- * g. complex_delimiter_level_2 -- value to Split the nested 
complexTypeData
- * h. quotechar
- * i. escapechar
- *
- * Default values are as follows.
- *
- * a. bad_records_logger_enable -- "false"
- * b. bad_records_action -- "FAIL"
- * c. bad_record_path -- ""
- * d. dateformat -- "" , uses from carbon.properties file
- * e. timestampformat -- "", uses from carbon.properties file
- * f. complex_delimiter_level_1 -- "$"
- * g. complex_delimiter_level_2 -- ":"
- * h. quotechar -- "\""
- * i. escapechar -- "\\"
- *
- * @return updated CarbonWriterBuilder
- */
-void withLoadOption(char *key, char *value);
+/**
+ * create a CarbonWriterBuilder object for building carbonWriter,
+ * CarbonWriterBuilder object  can configure different parameter
+ *
+ * @param env JNIEnv
+ * @return CarbonWriterBuilder object
+ */
+void builder(JNIEnv *env);
+```
+
+```
+/**
+ * Sets the output path of the writer builder
+ *
+ * @param path is the absolute path where output files are written
+ * This method must be called when building CarbonWriterBuilder
+ * @return updated CarbonWriterBuilder
+ */
+void outputPath(char *path);
+```
+
+```
+/**
+ * sets the list of columns that needs to be in sorted order
+ *
+ * @param argc argc argument counter, the number of projection column
+ * @param argv argv is a string array of columns that needs to be sorted.
+ *  If it is null or by default all dimensions are 
selected for sorting
+ *  If it is empty array, no columns are sorted
+ */
+void sortBy(int argc, char *argv[]);
+```
+
+```
+/**
+ * configure the schema with json style schema
+ *
+ * @param jsonSchema json style schema
+ * @return updated CarbonWriterBuilder
+ */
+void withCsvInput(char *jsonSchema);
+```
+
+```
+/**
+* Updates the hadoop configuration with 

[GitHub] carbondata issue #2161: [CARBONDATA-2218] AlluxioCarbonFile while trying to ...

2018-12-28 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/2161
  
Thanks @chandrasaripaka 


---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10326/



---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2277/



---


[GitHub] carbondata issue #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread kumarvishal09
Github user kumarvishal09 commented on the issue:

https://github.com/apache/carbondata/pull/3029
  
@NamanRastogi You can merge HybridSorter into CompactionResultSortProcessor. For 
unsorted files it will be the same flow; for sorted files, add an adapter 
(InMemorySortTempFileChunkHolder) on top of RawResultIterator that is in line 
with the SortTempFileChunkHolder class, so the interface stays the same. In 
SingleThreadFinalMerger, expose one method that takes a list of sorted 
RawResultIterators and adds them to the record holder heap (PriorityQueue). In 
InMemorySortTempFileChunkHolder you have to convert Object[] to 
IntermediateSortTempRow in the getRow method. A rough, purely illustrative 
sketch of the adapter idea follows below.
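
The interface and class below are simplified placeholders, not the actual 
CarbonData signatures; they only show how a sorted in-memory source can be 
wrapped so the merge code sees the same contract it already consumes:

```java
// Hypothetical, simplified types -- only to illustrate the adapter pattern.
interface SortTempRowHolder {          // stands in for SortTempFileChunkHolder's contract
  boolean hasNext();
  Object[] getRow();                   // the real code would return IntermediateSortTempRow
}

// Adapter: exposes an already-sorted, in-memory row source through the same
// holder contract, so the final merger does not need a separate code path.
class InMemorySortedRowHolder implements SortTempRowHolder {
  private final java.util.Iterator<Object[]> sortedRows;

  InMemorySortedRowHolder(java.util.Iterator<Object[]> sortedRows) {
    this.sortedRows = sortedRows;
  }

  @Override public boolean hasNext() { return sortedRows.hasNext(); }

  @Override public Object[] getRow() {
    // Here the real implementation would convert Object[] into IntermediateSortTempRow.
    return sortedRows.next();
  }
}
```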


---


[GitHub] carbondata issue #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/3029
  
@NamanRastogi Lots of code is duplicated here. Please try to unify it with the 
other compaction processors to avoid the duplication.


---


[GitHub] carbondata pull request #2963: [CARBONDATA-3139] Fix bugs in MinMaxDataMap e...

2018-12-28 Thread xuchuanyin
Github user xuchuanyin commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2963#discussion_r244344781
  
--- Diff: 
datamap/example/src/main/java/org/apache/carbondata/datamap/minmax/MinMaxDataMapFactory.java
 ---
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.datamap.minmax;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import 
org.apache.carbondata.common.exceptions.sql.MalformedDataMapCommandException;
+import org.apache.carbondata.common.logging.LogServiceFactory;
+import org.apache.carbondata.core.cache.Cache;
+import org.apache.carbondata.core.cache.CacheProvider;
+import org.apache.carbondata.core.cache.CacheType;
+import org.apache.carbondata.core.datamap.DataMapDistributable;
+import org.apache.carbondata.core.datamap.DataMapLevel;
+import org.apache.carbondata.core.datamap.DataMapMeta;
+import org.apache.carbondata.core.datamap.DataMapStoreManager;
+import org.apache.carbondata.core.datamap.Segment;
+import org.apache.carbondata.core.datamap.TableDataMap;
+import org.apache.carbondata.core.datamap.dev.DataMapBuilder;
+import org.apache.carbondata.core.datamap.dev.DataMapWriter;
+import org.apache.carbondata.core.datamap.dev.cgdatamap.CoarseGrainDataMap;
+import 
org.apache.carbondata.core.datamap.dev.cgdatamap.CoarseGrainDataMapFactory;
+import org.apache.carbondata.core.datastore.block.SegmentProperties;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFile;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFileFilter;
+import org.apache.carbondata.core.datastore.impl.FileFactory;
+import org.apache.carbondata.core.features.TableOperation;
+import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
+import org.apache.carbondata.core.metadata.schema.table.DataMapSchema;
+import 
org.apache.carbondata.core.metadata.schema.table.column.CarbonColumn;
+import org.apache.carbondata.core.scan.filter.intf.ExpressionType;
+import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.core.util.path.CarbonTablePath;
+import org.apache.carbondata.events.Event;
+
+import org.apache.log4j.Logger;
+
+/**
+ * Min Max DataMap Factory
+ */
+@InterfaceAudience.Internal
+public class MinMaxDataMapFactory extends CoarseGrainDataMapFactory {
+  private static final Logger LOGGER =
+  
LogServiceFactory.getLogService(MinMaxDataMapFactory.class.getName());
+  private DataMapMeta dataMapMeta;
+  private String dataMapName;
+  // segmentId -> list of index files
+  private Map> segmentMap = new ConcurrentHashMap<>();
+  private Cache cache;
+
+  public MinMaxDataMapFactory(CarbonTable carbonTable, DataMapSchema 
dataMapSchema)
+  throws MalformedDataMapCommandException {
+super(carbonTable, dataMapSchema);
+
+// this is an example for datamap, we can choose the columns and 
operations that
+// will be supported by this datamap. Furthermore, we can add 
cache-support for this datamap.
+
+this.dataMapName = dataMapSchema.getDataMapName();
+List indexedColumns = 
carbonTable.getIndexedColumns(dataMapSchema);
+
+// operations that will be supported on the indexed columns
+List optOperations = new ArrayList<>();
+optOperations.add(ExpressionType.NOT);
+optOperations.add(ExpressionType.EQUALS);
+optOperations.add(ExpressionType.NOT_EQUALS);
+optOperations.add(ExpressionType.GREATERTHAN);
+optOperations.add(ExpressionType.GREATERTHAN_EQUALTO);
+optOperations.add(ExpressionType.LESSTHAN);
 

[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2072/



---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2274/



---


[GitHub] carbondata issue #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3029
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2276/



---


[GitHub] carbondata issue #3019: [CARBONDATA-3194] Integrating Carbon with Presto usi...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3019
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10324/



---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread akashrn5
Github user akashrn5 commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
retest this please


---


[GitHub] carbondata pull request #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread kumarvishal09
Github user kumarvishal09 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3029#discussion_r244332001
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
 ---
@@ -166,8 +164,9 @@ class CarbonMergerRDD[K, V](
 carbonLoadModel.setTablePath(tablePath)
 // check for restructured block
 // TODO: only in case of add and drop this variable should be true
-val restructuredBlockExists: Boolean = CarbonCompactionUtil
-  .checkIfAnyRestructuredBlockExists(segmentMapping,
+val restructuredBlockExists: Boolean =
+  CarbonCompactionUtil.checkIfAnyRestructuredBlockExists(
--- End diff --

revert this change


---


[GitHub] carbondata issue #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread kumarvishal09
Github user kumarvishal09 commented on the issue:

https://github.com/apache/carbondata/pull/3029
  
@NamanRastogi Please add detailed comments for all the changed code.


---


[GitHub] carbondata pull request #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread kumarvishal09
Github user kumarvishal09 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3029#discussion_r244331702
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
 ---
@@ -207,18 +212,34 @@ class CarbonMergerRDD[K, V](
 carbonMergerMapping.campactionType,
 factTableName,
 partitionSpec)
+
+} else if (CarbonCompactionUtil
+  .anyUnsortedOrRestructuredBlocks(rawResultIteratorList, 
rawResultIteratorBooleanMap)) {
+
+  LOGGER.info("HybridSortProcessor flow is selected")
+  processor = new HybridSortProcessor(
+carbonLoadModel,
+carbonTable,
+segmentProperties,
+carbonMergerMapping.campactionType,
+factTableName,
+partitionSpec,
+rawResultIteratorBooleanMap)
+
 } else {
+
   LOGGER.info("RowResultMergerProcessor flow is selected")
-  processor =
-new RowResultMergerProcessor(
-  databaseName,
-  factTableName,
-  segmentProperties,
-  tempStoreLoc,
-  carbonLoadModel,
-  carbonMergerMapping.campactionType,
-  partitionSpec)
+  processor = new RowResultMergerProcessor(
--- End diff --

revert this change


---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread akashrn5
Github user akashrn5 commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
retest this please


---


[GitHub] carbondata issue #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3029
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2071/



---


[GitHub] carbondata issue #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3029
  
Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10325/



---


[GitHub] carbondata issue #3019: [CARBONDATA-3194] Integrating Carbon with Presto usi...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3019
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2070/



---


[GitHub] carbondata issue #3019: [CARBONDATA-3194] Integrating Carbon with Presto usi...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3019
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2275/



---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10323/



---


[GitHub] carbondata issue #3033: [CARBONDATA-3215] Optimize the documentation

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3033
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10322/



---


[GitHub] carbondata issue #3033: [CARBONDATA-3215] Optimize the documentation

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3033
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2273/



---


[GitHub] carbondata pull request #3029: [CARBONDATA-3200] No-Sort compaction

2018-12-28 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3029#discussion_r244319031
  
--- Diff: 
processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionExecutor.java
 ---
@@ -126,17 +128,24 @@ public CarbonCompactionExecutor(Map segmentMapping,
   // for each segment get taskblock info
   TaskBlockInfo taskBlockInfo = taskMap.getValue();
   Set taskBlockListMapping = taskBlockInfo.getTaskSet();
+  // Check if block needs sorting or not
+  boolean sortingRequired =
+  CarbonCompactionUtil.isRestructured(listMetadata, 
carbonTable.getTableLastUpdatedTime())
+  || !CarbonCompactionUtil.isSorted(taskBlockInfo);
--- End diff --

Here we are reading each and every carbondata file footer, which will impact 
compaction performance. I feel we should discuss and consider also adding an 
isSort flag to the carbonindex file to simplify this.


---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2069/



---


[GitHub] carbondata issue #3033: [CARBONDATA-3215] Optimize the documentation

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3033
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2068/



---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2272/



---


[GitHub] carbondata pull request #3033: [CARBONDATA-3215] Optimize the documentation

2018-12-28 Thread xubo245
GitHub user xubo245 opened a pull request:

https://github.com/apache/carbondata/pull/3033

[CARBONDATA-3215] Optimize the documentation

When users use the global dictionary, local dictionary, and non-dictionary 
columns in the code, they may get confused; the same applies to mvDataMap and 
IndexDataMap. This PR describes and lists them:
1. describe global dictionary, local dictionary, and non-dictionary together 
in the docs
2. list mvDataMap and IndexDataMap

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 No
 - [ ] Any backward compatibility impacted?
 No
 - [ ] Document update required?
yes, only update the doc
 - [ ] Testing done
   No need
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 
No


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xubo245/carbondata CARBONDATA-3215_OptimizeDoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/3033.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3033


commit 1eae260dcbddadb592357e0f71309d27960de6c0
Author: xubo245 
Date:   2018-12-28T12:23:13Z

[CARBONDATA-3215] Optimize the documentation
1. describe Global dictionary local dictionary,non-dictionary together in 
doc
2. list mvdataMap and IndexDataMap




---


[jira] [Resolved] (CARBONDATA-3203) Compaction failing for table which is restructured

2018-12-28 Thread Manish Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manish Gupta resolved CARBONDATA-3203.
--
   Resolution: Fixed
Fix Version/s: 1.5.2

> Compaction failing for table which is restructured
> ---
>
> Key: CARBONDATA-3203
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3203
> Project: CarbonData
>  Issue Type: Bug
>Reporter: MANISH NALLA
>Assignee: MANISH NALLA
>Priority: Minor
> Fix For: 1.5.2
>
>
> Steps to reproduce:
>  # Create table with complex and primitive types.
>  # Load data 2-3 times.
>  # Drop one column.
>  # Trigger Compaction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (CARBONDATA-3196) Compaction Failing for Complex datatypes with Dictionary Include

2018-12-28 Thread Manish Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manish Gupta resolved CARBONDATA-3196.
--
   Resolution: Fixed
Fix Version/s: 1.5.2

> Compaction Failing for Complex datatypes with Dictionary Include
> 
>
> Key: CARBONDATA-3196
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3196
> Project: CarbonData
>  Issue Type: Bug
>Reporter: MANISH NALLA
>Assignee: MANISH NALLA
>Priority: Minor
> Fix For: 1.5.2
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  # Create Table with Complex type and Dictionary Include Complex type.
>  # Load data into the table 2-3 times.
>  # Alter table compact 'major'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #3022: [CARBONDATA-3196] [CARBONDATA-3203]Fixed Comp...

2018-12-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/3022


---


[GitHub] carbondata issue #2971: [TEST] Test loading performance with range_column

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2971
  
Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10319/



---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10321/



---


[GitHub] carbondata pull request #3027: [CARBONDATA-3202]update the schema to session...

2018-12-28 Thread manishgupta88
Github user manishgupta88 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3027#discussion_r244306298
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
 ---
@@ -262,13 +263,28 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
   carbonTable: CarbonTable,
   tableInfo: TableInfo,
   addColumnSchema: ColumnSchema,
-  schemaEvolutionEntry: SchemaEvolutionEntry): Unit = {
+  schemaEvolutionEntry: SchemaEvolutionEntry,
+  oldCarbonColumn: CarbonColumn): Unit = {
 val schemaConverter = new ThriftWrapperSchemaConverterImpl
-val a = List(schemaConverter.fromExternalToWrapperColumnSchema(addColumnSchema))
-val (tableIdentifier, schemaParts, cols) = AlterTableUtil.updateSchemaInfo(
-  carbonTable, schemaEvolutionEntry, tableInfo, Some(a))(sparkSession)
+// get the carbon column in schema order
+val carbonColumns = carbonTable.getCreateOrderColumn(carbonTable.getTableName).asScala
+  .collect { case carbonColumn if !carbonColumn.isInvisible => carbonColumn.getColumnSchema }
+// get the schema ordinal of the column for which the datatype changed or column is renamed
+var schemaOrdinal: Int = 0
+carbonColumns.foreach { carbonColumn =>
+  if (carbonColumn.getColumnName.equalsIgnoreCase(oldCarbonColumn.getColName)) {
+    schemaOrdinal = carbonColumns.indexOf(carbonColumn)
--- End diff --

Use the filter function to achieve the required output.
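
A small Scala sketch of one way to read that suggestion, using a filter-style 
collection lookup instead of foreach plus indexOf; the plain string column names 
here are illustrative stand-ins for the CarbonColumn objects in the real code.

    // Find the schema ordinal of the renamed or retyped column in a single pass.
    val carbonColumns = Seq("id", "name", "salary")
    val oldColumnName = "Name"
    // indexWhere returns the position of the first match, or -1 if no column matches.
    val schemaOrdinal = carbonColumns.indexWhere(_.equalsIgnoreCase(oldColumnName))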


---


[GitHub] carbondata issue #2996: [WIP] Fix Rename-Fail & Datamap-creation-Fail

2018-12-28 Thread kumarvishal09
Github user kumarvishal09 commented on the issue:

https://github.com/apache/carbondata/pull/2996
  
@NamanRastogi Please fix the build failure.


---


[GitHub] carbondata pull request #3019: [CARBONDATA-3194] Integrating Carbon with Pre...

2018-12-28 Thread kumarvishal09
Github user kumarvishal09 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3019#discussion_r244104691
  
--- Diff: 
integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataPageSourceProvider.java
 ---
@@ -43,63 +43,78 @@
 
 import static org.apache.carbondata.presto.Types.checkType;
 
+import com.facebook.presto.hive.HdfsEnvironment;
+import com.facebook.presto.hive.HiveClientConfig;
+import com.facebook.presto.hive.HiveColumnHandle;
+import com.facebook.presto.hive.HivePageSourceFactory;
+import com.facebook.presto.hive.HivePageSourceProvider;
+import com.facebook.presto.hive.HiveRecordCursorProvider;
+import com.facebook.presto.hive.HiveSplit;
 import com.facebook.presto.spi.ColumnHandle;
 import com.facebook.presto.spi.ConnectorPageSource;
 import com.facebook.presto.spi.ConnectorSession;
 import com.facebook.presto.spi.ConnectorSplit;
-import com.facebook.presto.spi.connector.ConnectorPageSourceProvider;
+import com.facebook.presto.spi.SchemaTableName;
 import com.facebook.presto.spi.connector.ConnectorTransactionHandle;
+import com.facebook.presto.spi.type.TypeManager;
 import com.google.common.collect.ImmutableList;
 import com.google.inject.Inject;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.TaskAttemptContextImpl;
 import org.apache.hadoop.mapred.TaskAttemptID;
 import org.apache.hadoop.mapreduce.TaskType;
 
-import static com.google.common.base.Preconditions.checkArgument;
 import static com.google.common.base.Preconditions.checkNotNull;
 
-
 /**
  * Provider Class for Carbondata Page Source class.
  */
-public class CarbondataPageSourceProvider implements ConnectorPageSourceProvider {
+public class CarbondataPageSourceProvider extends HivePageSourceProvider {
 
-  private String connectorId;
   private CarbonTableReader carbonTableReader;
   private String queryId ;
-
-  @Inject public CarbondataPageSourceProvider(CarbondataConnectorId connectorId,
+  private HdfsEnvironment hdfsEnvironment;
+
+  @Inject public CarbondataPageSourceProvider(
+  HiveClientConfig hiveClientConfig,
+  HdfsEnvironment hdfsEnvironment,
+  Set<HiveRecordCursorProvider> cursorProviders,
+  Set<HivePageSourceFactory> pageSourceFactories,
+  TypeManager typeManager,
   CarbonTableReader carbonTableReader) {
-this.connectorId = requireNonNull(connectorId, "connectorId is null").toString();
+super(hiveClientConfig, hdfsEnvironment, cursorProviders, pageSourceFactories, typeManager);
 this.carbonTableReader = requireNonNull(carbonTableReader, "carbonTableReader is null");
+this.hdfsEnvironment = hdfsEnvironment;
   }
 
   @Override
   public ConnectorPageSource createPageSource(ConnectorTransactionHandle transactionHandle,
   ConnectorSession session, ConnectorSplit split, List<ColumnHandle> columns) {
-this.queryId = ((CarbondataSplit)split).getQueryId();
+HiveSplit carbonSplit =
+checkType(split, HiveSplit.class, "split is not class HiveSplit");
+if (carbonSplit.getSchema().getProperty("queryId") == null) {
+  return super.createPageSource(transactionHandle, session, split, columns);
+}
+this.queryId = carbonSplit.getSchema().getProperty("queryId");
--- End diff --

Move this line above the if condition, and in the if condition check whether 
queryId is null.


---


[jira] [Resolved] (CARBONDATA-3195) Added validation for inverted index

2018-12-28 Thread kumar vishal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kumar vishal resolved CARBONDATA-3195.
--
Resolution: Fixed
  Assignee: Shardul Singh

> Added validation for inverted index
> ---
>
> Key: CARBONDATA-3195
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3195
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Shardul Singh
>Assignee: Shardul Singh
>Priority: Minor
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #3022: [CARBONDATA-3196] [CARBONDATA-3203]Fixed Compaction ...

2018-12-28 Thread manishgupta88
Github user manishgupta88 commented on the issue:

https://github.com/apache/carbondata/pull/3022
  
LGTM


---


[GitHub] carbondata pull request #3020: [CARBONDATA-3195]Added validation for Inverte...

2018-12-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/3020


---


[GitHub] carbondata issue #3020: [CARBONDATA-3195]Added validation for Inverted Index...

2018-12-28 Thread kumarvishal09
Github user kumarvishal09 commented on the issue:

https://github.com/apache/carbondata/pull/3020
  
LGTM


---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2067/



---


[GitHub] carbondata issue #2971: [TEST] Test loading performance with range_column

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2971
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2269/



---


[GitHub] carbondata issue #3019: [CARBONDATA-3194] Integrating Carbon with Presto usi...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3019
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2268/



---


[GitHub] carbondata pull request #3032: [CARBONDATA-3210] merge getKeyOnPrefix into C...

2018-12-28 Thread qiuchenjian
Github user qiuchenjian commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3032#discussion_r244295612
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala
 ---
@@ -117,4 +116,18 @@ object CarbonSparkUtil {
 case _ =>
   delimiter
   }
+  def getKeyOnPrefix(path: String): (String, String, String) = {
--- End diff --

getKeyOnPrefix is the same as in S3Example. Why add the same method here when it 
is not called anywhere?


---


[GitHub] carbondata pull request #3032: [CARBONDATA-3210] merge getKeyOnPrefix into C...

2018-12-28 Thread BeyondYourself
Github user BeyondYourself commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3032#discussion_r244293682
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala
 ---
@@ -117,4 +116,18 @@ object CarbonSparkUtil {
 case _ =>
   delimiter
   }
+  def getKeyOnPrefix(path: String): (String, String, String) = {
+val endPoint = "spark.hadoop." + ENDPOINT
+if (path.startsWith(CarbonCommonConstants.S3A_PREFIX)) {
+  ("spark.hadoop." + ACCESS_KEY, "spark.hadoop." + SECRET_KEY, 
endPoint)
--- End diff --

Duplicated "spark.hadoop." literals make the process of refactoring error-prone, 
since you must be sure to update all occurrences. I think you can define the 
prefix as a constant and use it uniformly.
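
A minimal Scala sketch of that suggestion, with illustrative names 
(withSparkHadoopPrefix is not an existing CarbonData helper, and the fs.s3a.* 
keys below are just the standard Hadoop S3A property names used as an example):

    // Define the "spark.hadoop." prefix once and reuse it.
    val SparkHadoopPrefix = "spark.hadoop."
    def withSparkHadoopPrefix(key: String): String = SparkHadoopPrefix + key

    // Usage: build the three S3A property names from one definition of the prefix.
    val accessKeyProp = withSparkHadoopPrefix("fs.s3a.access.key")
    val secretKeyProp = withSparkHadoopPrefix("fs.s3a.secret.key")
    val endPointProp  = withSparkHadoopPrefix("fs.s3a.endpoint")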


---


[GitHub] carbondata issue #3027: [CARBONDATA-3202]update the schema to session catalo...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3027
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2066/



---


[GitHub] carbondata issue #3019: [CARBONDATA-3194] Integrating Carbon with Presto usi...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3019
  
Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10316/



---


[GitHub] carbondata issue #2971: [TEST] Test loading performance with range_column

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2971
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2065/



---


[GitHub] carbondata pull request #3032: [CARBONDATA-3210] merge getKeyOnPrefix into C...

2018-12-28 Thread qiuchenjian
Github user qiuchenjian commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/3032#discussion_r244291200
  
--- Diff: README.md ---
@@ -84,3 +85,6 @@ To get involved in CarbonData:
 ## About
 Apache CarbonData is an open source project of The Apache Software 
Foundation (ASF).
 
+
+## 2018-12-28开始
--- End diff --

What is the purpose of this description? Why is it in Chinese?


---


[GitHub] carbondata issue #3022: [CARBONDATA-3196] [CARBONDATA-3203]Fixed Compaction ...

2018-12-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/3022
  
Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/10315/



---


[GitHub] carbondata issue #3032: [CARBONDATA-3210] merge getKeyOnPrefix into CarbonSp...

2018-12-28 Thread qiuchenjian
Github user qiuchenjian commented on the issue:

https://github.com/apache/carbondata/pull/3032
  
Please describe the changes in this PR.


---

