[GitHub] [carbondata] CarbonDataQA commented on issue #3211: [WIP] Support configuring Java version

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3211: [WIP] Support configuring Java version
URL: https://github.com/apache/carbondata/pull/3211#issuecomment-491165503
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11414/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3211: [WIP] Support configuring Java version

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3211: [WIP] Support configuring Java version
URL: https://github.com/apache/carbondata/pull/3211#issuecomment-491157695
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3348/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491157194
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3150/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491155230
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3347/
   




[GitHub] [carbondata] kumarvishal09 edited a comment on issue #3179: [CARBONDATA-3338] Support Incremental DataLoad for MV Datamap[with single parent table]

2019-05-09 Thread GitBox
kumarvishal09 edited a comment on issue #3179: [CARBONDATA-3338] Support 
Incremental DataLoad for MV Datamap[with single parent table]
URL: https://github.com/apache/carbondata/pull/3179#issuecomment-489635271
 
 
   
   
   
   
   @Indhumathi27 
   LGTM
   
   I have a few optimizations which we can consider in future PRs:
   
   1. In the scenario below we can avoid reloading the MV.
   Main table segments: 0,1,2
   MV: 0 => 0,1,2
   After compaction of the main table, it currently reloads the compacted segment 0.1 of the main table into the MV; we can avoid this by changing the mapping to {0,1,2} => {0.1}.
   
   2. Suppose I have an MV `select user, sum(column1)..from`, each segment of the MV has the same users, and the number of records is around 10M per MV segment. If we compact 10 segments, the result will have 100M records. In this case we can fire a self-query on the MV and aggregate the records, so compaction reduces the record count back to 10M and improves query performance. Some of the aggregators have to be changed during compaction, e.g. count becomes sum.
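   The mapping change in point 1 can be sketched as follows. This is an illustrative sketch only, not CarbonData's actual datamap code: the method name `remapAfterCompaction` and the map shapes are assumptions; it only shows how rewriting the MV-to-main-table segment mapping avoids reloading MV data after a main-table compaction.

```java
import java.util.*;

public class SegmentMappingDemo {
    // Rewrite an MV segment mapping after main-table compaction: any mapped
    // main-table segment that was merged is replaced by the compacted segment
    // id, so the MV data itself need not be reloaded.
    static Map<String, Set<String>> remapAfterCompaction(
            Map<String, Set<String>> mvToMain,
            Set<String> mergedSegments,
            String compactedSegment) {
        Map<String, Set<String>> result = new LinkedHashMap<>();
        for (Map.Entry<String, Set<String>> e : mvToMain.entrySet()) {
            Set<String> mapped = new LinkedHashSet<>();
            for (String seg : e.getValue()) {
                mapped.add(mergedSegments.contains(seg) ? compactedSegment : seg);
            }
            result.put(e.getKey(), mapped);
        }
        return result;
    }

    public static void main(String[] args) {
        // MV segment 0 was built from main-table segments 0, 1 and 2.
        Map<String, Set<String>> mapping = new LinkedHashMap<>();
        mapping.put("0", new LinkedHashSet<>(Arrays.asList("0", "1", "2")));
        // Main-table segments 0, 1, 2 are compacted into segment 0.1.
        Map<String, Set<String>> updated = remapAfterCompaction(
                mapping, new HashSet<>(Arrays.asList("0", "1", "2")), "0.1");
        System.out.println(updated);  // {0=[0.1]}
    }
}
```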
   
   
   




[GitHub] [carbondata] Indhumathi27 commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
Indhumathi27 commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491150074
 
 
   Retest this please




[GitHub] [carbondata] CarbonDataQA commented on issue #3211: [WIP] Support configuring Java version

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3211: [WIP] Support configuring Java version
URL: https://github.com/apache/carbondata/pull/3211#issuecomment-491148333
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3149/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491148148
 
 
   Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11413/
   




[GitHub] [carbondata] QiangCai opened a new pull request #3211: [WIP] Support configuring Java version

2019-05-09 Thread GitBox
QiangCai opened a new pull request #3211: [WIP] Support configuring Java version
URL: https://github.com/apache/carbondata/pull/3211
 
 
   Unify the Maven target bytecode version for the whole project.
   
   For example:
   mvn  install -DskipTests -Pspark-2.2 -Djava.version=1.7
   or 
   mvn  install -DskipTests -Pspark-2.2 -Djava.version=1.8
   
   limitation: 
   the presto module needs JDK 1.8 or later
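   A single inherited build property is one common way to wire this up in Maven. The sketch below is not the PR's actual diff; only the property name `java.version` is taken from the example commands above, the rest is an assumption about how a parent pom could apply it project-wide:

```xml
<!-- Sketch (not the PR's actual change): a parent-pom property inherited by
     all modules, overridable with -Djava.version=... on the command line. -->
<properties>
  <java.version>1.8</java.version>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>${java.version}</source>
        <target>${java.version}</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```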
   
   Be sure to do all of the following checklists to help us incorporate 
   your contribution quickly and easily:
   
- [ ] Any interfaces changed?

- [ ] Any backward compatibility impacted?

- [ ] Document update required?
   
- [ ] Testing done
   Please provide details on
   - Whether new unit test cases have been added or why no new tests are required?
   - How it is tested? Please attach test report.
   - Is it a performance related change? Please attach the performance test report.
   - Any additional information to help reviewers in testing this change.
  
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. 
   
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491138943
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3148/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491036750
 
 
   Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3346/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491029803
 
 
   Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11412/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3184: [CARBONDATA-3357] Support 
TableProperties from single parent table and restrict alter/delete/partition on 
mv
URL: https://github.com/apache/carbondata/pull/3184#issuecomment-491006333
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3147/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as range column issue

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix 
GC Overhead limit exceeded issue and partition column as range column issue
URL: https://github.com/apache/carbondata/pull/3210#issuecomment-491002849
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3345/
   




[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
Indhumathi27 commented on a change in pull request #3184: [CARBONDATA-3357] 
Support TableProperties from single parent table and restrict 
alter/delete/partition on mv
URL: https://github.com/apache/carbondata/pull/3184#discussion_r282590280
 
 

 ##
 File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mv/DataMapListeners.scala
 ##
 @@ -139,3 +148,130 @@ object LoadPostDataMapListener extends OperationEventListener {
     }
   }
 }
+
+/**
+ * Listeners to block operations like delete segment on id or by date on tables
+ * having an mv datamap or on mv datamap tables
+ */
+object DataMapDeleteSegmentPreListener extends OperationEventListener {
+  /**
+   * Called on a specified event occurrence
+   *
+   * @param event
+   * @param operationContext
+   */
+  override def onEvent(event: Event, operationContext: OperationContext): Unit = {
+    val carbonTable = event match {
+      case e: DeleteSegmentByIdPreEvent =>
+        e.asInstanceOf[DeleteSegmentByIdPreEvent].carbonTable
+      case e: DeleteSegmentByDatePreEvent =>
+        e.asInstanceOf[DeleteSegmentByDatePreEvent].carbonTable
+    }
+    if (null != carbonTable) {
+      if (CarbonTable.hasMVDataMap(carbonTable)) {
+        throw new UnsupportedOperationException(
+          "Delete segment operation is not supported on tables which have mv datamap")
+      }
+      if (DataMapUtil.isMVDataMapTable(carbonTable)) {
+        throw new UnsupportedOperationException(
+          "Delete segment operation is not supported on mv table")
+      }
+    }
+  }
+}
+
+object DataMapAddColumnsPreListener extends OperationEventListener {
+  /**
+   * Called on a specified event occurrence
+   *
+   * @param event
+   * @param operationContext
+   */
+  override def onEvent(event: Event, operationContext: OperationContext): Unit = {
+    val dataTypeChangePreListener = event.asInstanceOf[AlterTableAddColumnPreEvent]
+    val carbonTable = dataTypeChangePreListener.carbonTable
+    if (DataMapUtil.isMVDataMapTable(carbonTable)) {
+      throw new UnsupportedOperationException(
+        s"Cannot add columns in MV DataMap table ${
+          carbonTable.getDatabaseName
+        }.${ carbonTable.getTableName }")
+    }
+  }
+}
+
+
+object DataMapDropColumnPreListener extends OperationEventListener {
+  /**
+   * Called on a specified event occurrence
+   *
+   * @param event
+   * @param operationContext
+   */
+  override def onEvent(event: Event, operationContext: OperationContext): Unit = {
+    val dropColumnChangePreListener = event.asInstanceOf[AlterTableDropColumnPreEvent]
+    val carbonTable = dropColumnChangePreListener.carbonTable
+    val alterTableDropColumnModel = dropColumnChangePreListener.alterTableDropColumnModel
+    val columnsToBeDropped = alterTableDropColumnModel.columns
+    if (CarbonTable.hasMVDataMap(carbonTable)) {
+      val dataMapSchemaList = DataMapStoreManager.getInstance
+        .getDataMapSchemasOfTable(carbonTable).asScala
+      for (dataMapSchema <- dataMapSchemaList) {
+        if (null != dataMapSchema && !dataMapSchema.isIndexDataMap) {
+          val listOfColumns = DataMapListeners.getDataMapTableColumns(dataMapSchema, carbonTable)
+          val columnExistsInChild = listOfColumns.collectFirst {
+            case parentColumnName if columnsToBeDropped.contains(parentColumnName) =>
+              parentColumnName
+          }
+          if (columnExistsInChild.isDefined) {
+            throw new UnsupportedOperationException(
+              s"Column ${ columnExistsInChild.head } cannot be dropped because it exists " +
+              s"in mv datamap ${ dataMapSchema.getRelationIdentifier.toString }")
+          }
+        }
+      }
+    }
+    if (DataMapUtil.isMVDataMapTable(carbonTable)) {
+      throw new UnsupportedOperationException(
+        s"Cannot drop columns present in MV datamap table ${ carbonTable.getDatabaseName }." +
+        s"${ carbonTable.getTableName }")
+    }
+  }
+}
+
+object DataMapChangeDataTypeorRenameColumnPreListener
+  extends OperationEventListener {
+  /**
+   * Called on a specified event occurrence
+   *
+   * @param event
+   * @param operationContext
+   */
+  override def onEvent(event: Event, operationContext: OperationContext): Unit = {
+    val colRenameDataTypeChangePreListener = event
+      .asInstanceOf[AlterTableColRenameAndDataTypeChangePreEvent]
+    val carbonTable = colRenameDataTypeChangePreListener.carbonTable
+    val alterTableDataTypeChangeModel = colRenameDataTypeChangePreListener
+      .alterTableDataTypeChangeModel
+    val columnToBeAltered: String = alterTableDataTypeChangeModel.columnName
+    if (CarbonTable.hasMVDataMap(carbonTable)) {
+      val dataMapSchemaList = DataMapStoreManager.getInstance
+        .getDataMapSchemasOfTable(carbonTable).asScala
+      for (dataMapSchema <- dataMapSchemaList) {
+        if (null != dataMapSchema && !dataMapSchema.isIndexDataMap) {
+          val 

[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
Indhumathi27 commented on a change in pull request #3184: [CARBONDATA-3357] 
Support TableProperties from single parent table and restrict 
alter/delete/partition on mv
URL: https://github.com/apache/carbondata/pull/3184#discussion_r282591445
 
 

 ##
 File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mv/DataMapListeners.scala
 ##
+    if (CarbonTable.hasMVDataMap(carbonTable)) {
 
 Review comment:
   Moved to 
integration/spark2/src/main/scala/org/apache/spark/util/DataMapUtil.scala




[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
Indhumathi27 commented on a change in pull request #3184: [CARBONDATA-3357] 
Support TableProperties from single parent table and restrict 
alter/delete/partition on mv
URL: https://github.com/apache/carbondata/pull/3184#discussion_r282590217
 
 

 ##
 File path: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mv/DataMapListeners.scala
 ##
 @@ -139,3 +148,130 @@ object LoadPostDataMapListener extends 
OperationEventListener {
 }
   }
 }
+
+/**
+ * Listeners to block operations like delete segment on id or by date on tables
+ * having an mv datamap or on mv datamap tables
+ */
+object DataMapDeleteSegmentPreListener extends OperationEventListener {
+  /**
+   * Called on a specified event occurrence
+   *
+   * @param event
+   * @param operationContext
+   */
+  override def onEvent(event: Event, operationContext: OperationContext): Unit 
= {
+val carbonTable = event match {
+  case e: DeleteSegmentByIdPreEvent =>
+e.asInstanceOf[DeleteSegmentByIdPreEvent].carbonTable
+  case e: DeleteSegmentByDatePreEvent =>
+e.asInstanceOf[DeleteSegmentByDatePreEvent].carbonTable
+}
+if (null != carbonTable) {
+  if (CarbonTable.hasMVDataMap(carbonTable)) {
+throw new UnsupportedOperationException(
+  "Delete segment operation is not supported on tables which have mv 
datamap")
 
 Review comment:
   done




[GitHub] [carbondata] CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as range column issue

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix 
GC Overhead limit exceeded issue and partition column as range column issue
URL: https://github.com/apache/carbondata/pull/3210#issuecomment-490968058
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11411/
   




[GitHub] [carbondata] ravipesala commented on a change in pull request #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
ravipesala commented on a change in pull request #3184: [CARBONDATA-3357] 
Support TableProperties from single parent table and restrict 
alter/delete/partition on mv
URL: https://github.com/apache/carbondata/pull/3184#discussion_r282552675
 
 

 ##
 File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mv/DataMapListeners.scala
 ##
+        throw new UnsupportedOperationException(
+          "Delete segment operation is not supported on tables which have mv datamap")
 
 Review comment:
   Don't put mv name here, just check for child tables and throw exception.




[GitHub] [carbondata] ravipesala commented on a change in pull request #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
ravipesala commented on a change in pull request #3184: [CARBONDATA-3357] 
Support TableProperties from single parent table and restrict 
alter/delete/partition on mv
URL: https://github.com/apache/carbondata/pull/3184#discussion_r282552816
 
 

 ##
 File path: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mv/DataMapListeners.scala
 ##
 @@ -139,3 +148,130 @@ object LoadPostDataMapListener extends 
OperationEventListener {
 }
   }
 }
+
+/**
+ * Listeners to block operations like delete segment on id or by date on tables
+ * having an mv datamap or on mv datamap tables
+ */
+object DataMapDeleteSegmentPreListener extends OperationEventListener {
+  /**
+   * Called on a specified event occurrence
+   *
+   * @param event
+   * @param operationContext
+   */
+  override def onEvent(event: Event, operationContext: OperationContext): Unit 
= {
+val carbonTable = event match {
+  case e: DeleteSegmentByIdPreEvent =>
+e.asInstanceOf[DeleteSegmentByIdPreEvent].carbonTable
+  case e: DeleteSegmentByDatePreEvent =>
+e.asInstanceOf[DeleteSegmentByDatePreEvent].carbonTable
+}
+if (null != carbonTable) {
+  if (CarbonTable.hasMVDataMap(carbonTable)) {
 
 Review comment:
   CarbonTable should not have this method, 




[GitHub] [carbondata] ravipesala commented on a change in pull request #3184: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-09 Thread GitBox
ravipesala commented on a change in pull request #3184: [CARBONDATA-3357] 
Support TableProperties from single parent table and restrict 
alter/delete/partition on mv
URL: https://github.com/apache/carbondata/pull/3184#discussion_r282552106
 
 

 ##
 File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mv/DataMapListeners.scala
 ##
+          val 

[GitHub] [carbondata] CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as range column issue

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix 
GC Overhead limit exceeded issue and partition column as range column issue
URL: https://github.com/apache/carbondata/pull/3210#issuecomment-490941984
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3146/
   




[jira] [Updated] (CARBONDATA-3336) Support Binary Data Type

2019-05-09 Thread xubo245 (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xubo245 updated CARBONDATA-3336:

Description: 
CarbonData supports binary data type



Version | Changes | Owner | Date
0.1 | Init doc for supporting binary data type | Xubo | 2019-4-10

Background:
Binary is a basic data type and is widely used in various scenarios, so it is better to support the binary data type in CarbonData. Downloading data from S3 is slow when a dataset contains lots of small binary files. The majority of application scenarios involve storing small binary values in CarbonData, which avoids the small-binary-files problem, speeds up S3 access, and can decrease the cost of accessing OBS by reducing the number of S3 API calls. It will also be easier to manage structured data and unstructured data (binary) by storing them together in CarbonData.

Goals:
1. Support writing the binary data type through the Carbon Java SDK.
2. Support reading the binary data type through the Spark Carbon file format (carbon datasource) and CarbonSession.
3. Support reading the binary data type through the Carbon SDK.
4. Support writing binary through Spark.


Approach and Detail:
1. Supporting write binary data type by Carbon Java SDK [Formal]:
        1.1 The Java SDK needs to support writing data with specific data types, such as int, double, and byte[], instead of converting every data type to a string array. The user reads a binary file as byte[], then the SDK writes the byte[] into the binary column. => Done
        1.2 CarbonData compresses the binary column because the compressor is currently table level. => Done
            => TODO: support configuring compression on or off, with no compression as the default, because binary data is usually already compressed (e.g. JPG images), so there is no need to compress the binary column again. 1.5.4 will support column-level compression; after that, we can implement no compression for binary. We can discuss this with the community.
        1.3 CarbonData stores binary as a dimension. => Done
        1.4 Support configuring the page size for the binary data type, because binary values are usually large (e.g. 200 KB); otherwise one blocklet (32000 rows) would become very big. => Done
        1.5 Avro and JSON conversion need consideration:
            - Avro fixed- and variable-length binary => Avro doesn't support a binary data type => not needed
            - Support reading binary from JSON => Done
        1.6 Binary data type as a child column in Struct or Map => support it in the future, but the priority is not very high; not in 1.5.4.
        1.7 Verify the maximum size of a supported binary value => Snappy only supports about 1.71 GB; the maximum data size should be 2 GB, but this needs confirmation.


2. Supporting read and manage binary data type by Spark Carbon file format (carbon DataSource) and CarbonSession. [Formal]
        2.1 Support reading the binary data type from a non-transactional table: read the binary column and return it as byte[]. => Done
        2.2 Support creating a table with a binary column; the table properties sort_columns, dictionary, COLUMN_META_CACHE, and RANGE_COLUMN are not supported for a binary column. => Done
            => The Carbon datasource doesn't support dictionary-include columns.
            => Support carbon.column.compressor = snappy, zstd, gzip for binary; compression applies to all columns (table level).
        2.3 Support CTAS for binary => transactional/non-transactional, Carbon/Hive/Parquet => Done
        2.4 Support external tables for binary => Done
        2.5 Support projection of a binary column => Done
        2.6 Support DESC FORMATTED => Done
            => The Carbon datasource doesn't support the ALTER TABLE ADD COLUMNS SQL.
            Support ALTER TABLE (add column, rename, drop column) for the binary data type in CarbonSession => Done
            Changing the data type of a binary column via ALTER TABLE is not supported => Done
        2.7 PARTITION and BUCKETCOLUMNS are not supported for binary => Done
        2.8 Support compaction for binary => Done
        2.9 DataMap: support bloomfilter, mv, and pre-aggregate; don't support lucene or timeseries datamaps; no need for a min/max datamap for binary. => Done
        2.10 CSDK / Python SDK will support binary in the future. => TODO
        2.11 Support S3 => Done
        2.12 Support UDFs: hex, base64, cast => TODO
            select hex(bin) from carbon_table => TODO
        2.13 Support configurable decode for queries: base64 and hex decode. => Done
        2.15 How large a binary value can be supported for writing and reading? => TODO
        2.16 Support filters on binary => Done
        2.17 select CAST(s AS BINARY) from carbon_table => Done
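The hex and base64 decode behaviour mentioned in items 2.12 and 2.13 can be illustrated outside CarbonData with a small Python sketch using only stdlib encoding; this is not CarbonData's actual UDF implementation, just the encoding semantics such a query-time decode relies on:

```python
import base64
import binascii

# A small binary payload, e.g. the first bytes of a JPEG file header.
payload = bytes([0xFF, 0xD8, 0xFF, 0xE0])

# Hex encoding: the textual form a `select hex(bin)` style UDF would return.
hex_text = binascii.hexlify(payload).decode("ascii").upper()

# Base64 encoding: the other configurable decode mentioned above.
b64_text = base64.b64encode(payload).decode("ascii")

# Both encodings round-trip back to the original bytes.
assert binascii.unhexlify(hex_text) == payload
assert base64.b64decode(b64_text) == payload

print(hex_text)  # FFD8FFE0
print(b64_text)  # /9j/4A==
```

Since both encodings round-trip losslessly, either can be used to move binary column values through a text-only query interface.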

[GitHub] [carbondata] ravipesala commented on a change in pull request #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as range column issue

2019-05-09 Thread GitBox
ravipesala commented on a change in pull request #3210: [CARBONDATA-3375] 
[CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as 
range column issue
URL: https://github.com/apache/carbondata/pull/3210#discussion_r282474383
 
 

 ##
 File path: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
 ##
 @@ -433,75 +442,118 @@ class CarbonMergerRDD[K, V](
 val newRanges = allRanges.filter { range =>
   range != null
 }
-carbonInputSplits.foreach { split =>
-  var dataFileFooter: DataFileFooter = null
-  if (null == rangeColumn) {
-val taskNo = getTaskNo(split, partitionTaskMap, counter)
-var sizeOfSplit = split.getDetailInfo.getBlockSize
-val splitList = taskIdMapping.get(taskNo)
-noOfBlocks += 1
+val noOfSplitsPerTask = Math.ceil(carbonInputSplits.size / defaultParallelism)
+var taskCount = 0
+// In case of range column if only one data value is present then we try to
+// divide the splits to different tasks in order to avoid single task creation
+// and load on single executor
+if (singleRange) {
+  var filterExpr = CarbonCompactionUtil
 
 Review comment:
   For single range no need to add filter expression.
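The split-distribution idea in the patch under review can be sketched in Python; this is a hypothetical stand-in for the Scala logic, not the actual CarbonMergerRDD code. One detail worth noting: if `carbonInputSplits.size` and `defaultParallelism` are both integers in the quoted Scala line, the division truncates before `Math.ceil` runs, so the ceiling has no effect there.

```python
import math

def assign_splits(splits, parallelism):
    # At most ceil(len(splits) / parallelism) splits per task, so the
    # splits of a single range are spread over several tasks instead of
    # overloading one executor. Dividing as floats matters here: with
    # integer division the ceil would be a no-op.
    per_task = math.ceil(len(splits) / parallelism)
    return [splits[i:i + per_task] for i in range(0, len(splits), per_task)]

tasks = assign_splits(list(range(10)), 4)
print(tasks)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

With 10 splits and a parallelism of 4, no task gets more than 3 splits and no task is created empty.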




[GitHub] [carbondata] qiuchenjian closed pull request #3101: [CARBONDATA-3270] MV support groupby columns don't need be existed in the projection

2019-05-09 Thread GitBox
qiuchenjian closed pull request #3101: [CARBONDATA-3270] MV support groupby 
columns don't need be existed in the projection
URL: https://github.com/apache/carbondata/pull/3101
 
 
   




[GitHub] [carbondata] qiuchenjian commented on issue #3101: [CARBONDATA-3270] MV support groupby columns don't need be existed in the projection

2019-05-09 Thread GitBox
qiuchenjian commented on issue #3101: [CARBONDATA-3270] MV support groupby 
columns don't need be existed in the projection
URL: https://github.com/apache/carbondata/pull/3101#issuecomment-490894087
 
 
   @ravipesala @kevinjmh  OK,  close this PR




[jira] [Resolved] (CARBONDATA-3371) Compaction show ArrayIndexOutOfBoundsException after sort_columns modification

2019-05-09 Thread Ravindra Pesala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-3371.
-
   Resolution: Fixed
Fix Version/s: 1.5.4

> Compaction show ArrayIndexOutOfBoundsException after sort_columns modification
> --
>
> Key: CARBONDATA-3371
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3371
> Project: CarbonData
>  Issue Type: Bug
>Reporter: QiangCai
>Assignee: QiangCai
>Priority: Major
> Fix For: 1.5.4
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> 2019-05-05 15:26:39 ERROR DataTypeUtil:619 - Cannot convert� Z�w} to SHORT 
> type valueWrong length: 8, expected 2
> 2019-05-05 15:26:39 ERROR DataTypeUtil:621 - Problem while converting data 
> type� Z�w} 
> 2019-05-05 15:26:39 ERROR CompactionResultSortProcessor:185 - 3
> java.lang.ArrayIndexOutOfBoundsException: 3
>  at 
> org.apache.carbondata.core.scan.wrappers.ByteArrayWrapper.getNoDictionaryKeyByIndex(ByteArrayWrapper.java:81)
>  at 
> org.apache.carbondata.processing.merger.CompactionResultSortProcessor.prepareRowObjectForSorting(CompactionResultSortProcessor.java:332)
>  at 
> org.apache.carbondata.processing.merger.CompactionResultSortProcessor.processResult(CompactionResultSortProcessor.java:250)
>  at 
> org.apache.carbondata.processing.merger.CompactionResultSortProcessor.execute(CompactionResultSortProcessor.java:175)
>  at 
> org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.(CarbonMergerRDD.scala:226)
>  at 
> org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:84)
>  at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:82)
>  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>  at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>  at org.apache.spark.scheduler.Task.run(Task.scala:108)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-05-05 15:26:39 ERROR CarbonMergerRDD:233 - Compaction Failed
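The log above reads like a byte-length mismatch: after sort_columns was modified, the reader receives a value laid out for one type and tries to decode it as another. A minimal Python illustration of the same failure mode follows; it uses the stdlib `struct` module and is not CarbonData's actual converter:

```python
import struct

# A SHORT column occupies 2 bytes. If old segments were written before a
# sort_columns change, the reader may receive a value laid out for a
# different type, e.g. 8 bytes, matching the
# "Wrong length: 8, expected 2" error in the log above.
stale_value = struct.pack(">q", 12345)   # 8 bytes (a long's layout)

try:
    struct.unpack(">h", stale_value)     # reader expects exactly 2 bytes
    converted = True
except struct.error:
    converted = False                    # the conversion fails, as logged

print(converted)  # False
```

The fix in the PR therefore has to reconcile the old and new column layouts during compaction rather than decode the stale bytes directly.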



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [carbondata] asfgit closed pull request #3201: [CARBONDATA-3371] Fix ArrayIndexOutOfBoundsException of compaction after sort_columns modification

2019-05-09 Thread GitBox
asfgit closed pull request #3201: [CARBONDATA-3371] Fix 
ArrayIndexOutOfBoundsException of compaction after sort_columns modification
URL: https://github.com/apache/carbondata/pull/3201
 
 
   




[GitHub] [carbondata] ravipesala commented on issue #3201: [CARBONDATA-3371] Fix ArrayIndexOutOfBoundsException of compaction after sort_columns modification

2019-05-09 Thread GitBox
ravipesala commented on issue #3201: [CARBONDATA-3371] Fix 
ArrayIndexOutOfBoundsException of compaction after sort_columns modification
URL: https://github.com/apache/carbondata/pull/3201#issuecomment-490856989
 
 
   LGTM




[GitHub] [carbondata] CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as range column issue

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix 
GC Overhead limit exceeded issue and partition column as range column issue
URL: https://github.com/apache/carbondata/pull/3210#issuecomment-490854563
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3344/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as range column issue

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix 
GC Overhead limit exceeded issue and partition column as range column issue
URL: https://github.com/apache/carbondata/pull/3210#issuecomment-490847684
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11410/
   




[GitHub] [carbondata] zhxiaoping commented on a change in pull request #3072: [CARBONDATA-3247] Support to select all columns when creating MV datamap

2019-05-09 Thread GitBox
zhxiaoping commented on a change in pull request #3072: [CARBONDATA-3247] 
Support to select all columns when creating MV datamap
URL: https://github.com/apache/carbondata/pull/3072#discussion_r282318254
 
 

 ##
 File path: 
datamap/mv/plan/src/main/scala/org/apache/carbondata/mv/plans/util/BirdcageOptimizer.scala
 ##
 @@ -132,7 +133,9 @@ object BirdcageOptimizer extends RuleExecutor[LogicalPlan] 
{
 Batch(
   "RewriteSubquery", Once,
   RewritePredicateSubquery,
-  CollapseProject) :: Nil
+  CollapseProject) ::
+Batch(
+  "MVProjectAdd", Once, MVProjectColumnsAdd)::Nil
 
 Review comment:
   After https://github.com/apache/carbondata/pull/3072#issuecomment-463446314 is done, adding one
`case g@modular.GroupBy(_, _, _, _, s@modular.Select(_, _, _, _, _, _, _, _, _, _), _, _, _)`
   to the function extractSimpleOperator makes it work.




[GitHub] [carbondata] CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix GC Overhead limit exceeded issue and partition column as range column issue

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3210: [CARBONDATA-3375] [CARBONDATA-3376] Fix 
GC Overhead limit exceeded issue and partition column as range column issue
URL: https://github.com/apache/carbondata/pull/3210#issuecomment-490826818
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3145/
   




[jira] [Created] (CARBONDATA-3376) Table containing Range Column as Partition Column fails Compaction

2019-05-09 Thread MANISH NALLA (JIRA)
MANISH NALLA created CARBONDATA-3376:


 Summary: Table containing Range Column as Partition Column fails 
Compaction
 Key: CARBONDATA-3376
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3376
 Project: CarbonData
  Issue Type: Bug
Reporter: MANISH NALLA


When the range column is given as the partition column, compaction fails.





[jira] [Updated] (CARBONDATA-3375) GC Overhead limit exceeded error for huge data in Range Compaction

2019-05-09 Thread MANISH NALLA (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MANISH NALLA updated CARBONDATA-3375:
-
Summary: GC Overhead limit exceeded error for huge data in Range Compaction 
 (was: GC Overhead limit exceeded error for huge data)

> GC Overhead limit exceeded error for huge data in Range Compaction
> --
>
> Key: CARBONDATA-3375
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3375
> Project: CarbonData
>  Issue Type: Bug
>Reporter: MANISH NALLA
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When only a single data value is present, compaction is launched as one
> single task, which results in one executor getting overloaded.





[GitHub] [carbondata] CarbonDataQA commented on issue #3207: [CARBONDATA-3374] Optimize documentation and fix some spell errors.

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3207: [CARBONDATA-3374] Optimize documentation 
and fix some spell errors.
URL: https://github.com/apache/carbondata/pull/3207#issuecomment-490820781
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11409/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3207: [CARBONDATA-3374] Optimize documentation and fix some spell errors.

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3207: [CARBONDATA-3374] Optimize documentation 
and fix some spell errors.
URL: https://github.com/apache/carbondata/pull/3207#issuecomment-490807220
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3343/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3206: [CARBONDATA-3362] Document update for pagesize table property scenario

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3206: [CARBONDATA-3362] Document update for 
pagesize table property scenario
URL: https://github.com/apache/carbondata/pull/3206#issuecomment-490790182
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11408/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3206: [CARBONDATA-3362] Document update for pagesize table property scenario

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3206: [CARBONDATA-3362] Document update for 
pagesize table property scenario
URL: https://github.com/apache/carbondata/pull/3206#issuecomment-490778057
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3342/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3207: [CARBONDATA-3374] Optimize documentation and fix some spell errors.

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3207: [CARBONDATA-3374] Optimize documentation 
and fix some spell errors.
URL: https://github.com/apache/carbondata/pull/3207#issuecomment-490777388
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3144/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3177: [CARBONDATA-3337][CARBONDATA-3306] Distributed index server

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3177: [CARBONDATA-3337][CARBONDATA-3306] 
Distributed index server
URL: https://github.com/apache/carbondata/pull/3177#issuecomment-490777132
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3341/
   




[GitHub] [carbondata] xubo245 commented on issue #3207: [CARBONDATA-3374] Optimize documentation and fix some spell errors.

2019-05-09 Thread GitBox
xubo245 commented on issue #3207: [CARBONDATA-3374] Optimize documentation and 
fix some spell errors.
URL: https://github.com/apache/carbondata/pull/3207#issuecomment-490771828
 
 
   retest this please




[GitHub] [carbondata] CarbonDataQA commented on issue #3177: [CARBONDATA-3337][CARBONDATA-3306] Distributed index server

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3177: [CARBONDATA-3337][CARBONDATA-3306] 
Distributed index server
URL: https://github.com/apache/carbondata/pull/3177#issuecomment-490758739
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/11407/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3206: [CARBONDATA-3362] Document update for pagesize table property scenario

2019-05-09 Thread GitBox
CarbonDataQA commented on issue #3206: [CARBONDATA-3362] Document update for 
pagesize table property scenario
URL: https://github.com/apache/carbondata/pull/3206#issuecomment-490757989
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3143/
   

