[jira] [Created] (CARBONDATA-3463) File not found exception when select filter query executed with Index server running

2019-07-03 Thread Chetan Bhat (JIRA)
Chetan Bhat created CARBONDATA-3463:
---

 Summary: File not found exception when select filter query 
executed with Index server running
 Key: CARBONDATA-3463
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3463
 Project: CarbonData
  Issue Type: Bug
  Components: other
Affects Versions: 1.6.0
 Environment: Spark 2.1
Reporter: Chetan Bhat


Steps:

Index server is running.

Create table and load data.

0: jdbc:hive2://10.19.91.221:22550/default> create table brinjal (imei 
string,AMSize string,channelsId string,ActiveCountry string, Activecity 
string,gamePointId double,deviceInformationId double,productionDate 
Timestamp,deliveryDate timestamp,deliverycharge double) STORED BY 
'org.apache.carbondata.format' 
TBLPROPERTIES('table_blocksize'='1','SORT_SCOPE'='LOCAL_SORT','carbon.column.compressor'='zstd');
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (1.757 seconds)
0: jdbc:hive2://10.19.91.221:22550/default> LOAD DATA INPATH 
'hdfs://hacluster/chetan/vardhandaterestruct.csv' INTO TABLE brinjal 
OPTIONS('DELIMITER'=',', 'QUOTECHAR'= 
'"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge');
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (5.349 seconds)

Issue: Select filter query fails with a FileNotFoundException.

0: jdbc:hive2://10.19.91.221:22550/default> select * from brinjal where 
ActiveCountry ='Chinese' or channelsId =4;
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 
0 in stage 3061.0 failed 4 times, most recent failure: Lost task 0.3 in stage 
3061.0 (TID 134228, linux-220, executor 1): java.io.FileNotFoundException: File 
does not exist: 
/user/hive/warehouse/carbon.store/1_6_0/brinjal/Fact/Part0/Segment_0/part-0-0_batchno0-0-0-1560934784938.carbondata
 at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:74)
 at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:648)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1736)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:712)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:402)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:973)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2260)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2256)

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299929028
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,216 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the 
driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, 
there are limitations 
+like driver memory scale-up, and cache sharing between multiple applications 
is not possible. In 
+the second case, there is
+no guarantee that the next query goes to the same executor to reuse the cache, 
and hence the cache 
+would be duplicated in multiple executors. 
+The Distributed Index Cache Server aims to solve the above-mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in the form of a request. The request will consist of 
the table name, segments,
+filter expression and other information used for pruning.
+
+In the IndexServer application a pruning RDD is fired which will take care of 
the pruning for that 
+request. This RDD will create tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+The IndexServer driver would have two important tasks: distributing the segments 
equally among the 
+available executors, and keeping track of the cache location (where the segment 
cache is present).
+
+To achieve this, two separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally (on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor, which would be updated in executorToCacheMapping, and the
+pruned blocklets, which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable, the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
+column called cache location will indicate whether the cache is from the
+executor or the driver. For drop cache the user has to enable/disable the
+index server using the dynamic configuration to clear the cache of the
+desired location.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode,
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer, which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to the
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
+cache as data cached in JDBCServer executors would be of no use.
+
+
+## Configurations
+
+# carbon.properties(JDBCServer) 
+
+| Name |  Default Value|  Description |
+|:--:|:-:|:--:   |
+| carbon.enable.index.server   |  false 
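
To make the distribution logic described in the quoted index-server.md above more
concrete, here is a minimal illustrative sketch. Only the mapping names
(tableToExecutorMapping, executorToCacheMapping) come from the document; the class,
method and parameter names are hypothetical and not the actual IndexServer
implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the segment-to-executor assignment described above.
public class SegmentDistributorSketch {
  // tableName -> (segmentNo -> executorId), as described in the document
  private final Map<String, Map<String, String>> tableToExecutorMapping = new HashMap<>();
  // executorId -> cache size held (bytes); simplified from host -> (executor -> size)
  private final Map<String, Long> executorToCacheMapping = new HashMap<>();

  public String assignExecutor(String tableName, String segmentNo,
      long segmentIndexSize, List<String> aliveExecutors) {
    if (aliveExecutors.isEmpty()) {
      throw new IllegalStateException("no alive executors to assign");
    }
    Map<String, String> segmentMap =
        tableToExecutorMapping.computeIfAbsent(tableName, k -> new HashMap<>());
    // Reuse the executor that most probably still caches this segment.
    String existing = segmentMap.get(segmentNo);
    if (existing != null && aliveExecutors.contains(existing)) {
      return existing;
    }
    // Otherwise prefer an executor with no cache assigned yet,
    // else pick the least loaded one by tracked cache size.
    String chosen = null;
    long smallest = Long.MAX_VALUE;
    for (String executor : aliveExecutors) {
      long load = executorToCacheMapping.getOrDefault(executor, 0L);
      if (load == 0L) {
        chosen = executor;
        break;
      }
      if (load < smallest) {
        smallest = load;
        chosen = executor;
      }
    }
    // The segment index size serves as an estimate until the real cache size is reported.
    executorToCacheMapping.merge(chosen, segmentIndexSize, Long::sum);
    segmentMap.put(segmentNo, chosen);
    return chosen;
  }
}
```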

[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508065938
 
 
   Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3965/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3294: [CARBONDATA-3462][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#issuecomment-508055299
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3963/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3294: [CARBONDATA-3462][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#issuecomment-508054145
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/12027/
   




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299895885
 
 

 ##
 File path: 
core/src/main/java/org/apache/carbondata/core/scan/expression/conditional/FilterUtil.java
 ##
 @@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.core.scan.expression.conditional;
+
+import java.util.List;
+
+import org.apache.carbondata.core.metadata.datatype.*;
+import org.apache.carbondata.core.scan.expression.ColumnExpression;
+import org.apache.carbondata.core.scan.expression.Expression;
+import org.apache.carbondata.core.scan.expression.LiteralExpression;
+import org.apache.carbondata.core.scan.expression.logical.OrExpression;
+
+
+/**
+ * provide function to prepare expression for filter
+ */
+public class FilterUtil {
+  public static Expression prepareEqualToExpression(String columnName, String 
dataType,
+  Object value) {
+if (DataTypes.STRING.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(value, DataTypes.STRING));
+} else if (DataTypes.INT.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.INT),
+  new LiteralExpression(value, DataTypes.INT));
+} else if (DataTypes.DOUBLE.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.DOUBLE),
+  new LiteralExpression(value, DataTypes.DOUBLE));
+} else if (DataTypes.FLOAT.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.FLOAT),
+  new LiteralExpression(value, DataTypes.FLOAT));
+} else if (DataTypes.SHORT.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.SHORT),
+  new LiteralExpression(value, DataTypes.SHORT));
+} else if (DataTypes.BINARY.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.BINARY),
+  new LiteralExpression(value, DataTypes.BINARY));
+} else if (DataTypes.DATE.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.DATE),
+  new LiteralExpression(value, DataTypes.DATE));
+} else if (DataTypes.LONG.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.LONG),
+  new LiteralExpression(value, DataTypes.LONG));
+} else if (DataTypes.TIMESTAMP.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.TIMESTAMP),
+  new LiteralExpression(value, DataTypes.TIMESTAMP));
+} else if (DataTypes.BYTE.getName().equalsIgnoreCase(dataType)) {
+  return new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.BYTE),
+  new LiteralExpression(value, DataTypes.BYTE));
+} else {
+  throw new IllegalArgumentException("Unsupported data type: " + dataType);
+}
+  }
+
+  public static Expression prepareEqualToExpressionSet(String columnName, 
DataType dataType,
+  List values) {
 
 Review comment:
   I think each object can be a different data type. Your set handles only 
same-data-type columns, which is not good. Make it generic.
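
A hedged sketch of what that suggestion could look like, inferring the data type
from each value instead of taking one shared DataType (the method body and the
inferDataType helper are illustrative, not the final API of this PR):

```java
// Illustrative only: builds an OR chain of EqualTo expressions, deriving the
// Carbon data type from each value's runtime type. Returns null for an empty list.
public static Expression prepareEqualToExpressionSet(String columnName, List<Object> values) {
  Expression result = null;
  for (Object value : values) {
    DataType dataType = inferDataType(value);  // hypothetical helper, see below
    Expression eq = new EqualToExpression(
        new ColumnExpression(columnName, dataType),
        new LiteralExpression(value, dataType));
    result = (result == null) ? eq : new OrExpression(result, eq);
  }
  return result;
}

private static DataType inferDataType(Object value) {
  if (value instanceof Integer) return DataTypes.INT;
  if (value instanceof Long) return DataTypes.LONG;
  if (value instanceof Double) return DataTypes.DOUBLE;
  if (value instanceof Float) return DataTypes.FLOAT;
  if (value instanceof Short) return DataTypes.SHORT;
  if (value instanceof Byte) return DataTypes.BYTE;
  if (value instanceof byte[]) return DataTypes.BINARY;
  return DataTypes.STRING;  // fallback for String and anything else
}
```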
   
   




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299894253
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(null, DataTypes.STRING));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(0), DataTypes.STRING));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(i), DataTypes.STRING));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  private CarbonReaderBuilder filter(String columnName, DataType dataType,
+ List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(null, dataType));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(0), dataType));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(i), dataType));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, String dataType, 
List values) {
+if (DataTypes.STRING.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.STRING, values);
+} else if (DataTypes.INT.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.INT, values);
+} else if (DataTypes.DOUBLE.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.DOUBLE, values);
+} else if (DataTypes.FLOAT.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.FLOAT, values);
+} else if (DataTypes.SHORT.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.SHORT, values);
+} else if (DataTypes.BINARY.getName().equalsIgnoreCase(dataType)) {
 
 Review comment:
   Have you tested? Without min/max storage, what is the use of filtering?
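
For context, a hypothetical usage of the builder overloads proposed in this PR.
The store path, table name and column names are made up; CarbonReader.builder,
projection, build, hasNext, readNextRow and close are the existing SDK calls,
while the filter(column, values) overload is the one under review:

```java
import java.util.Arrays;

import org.apache.carbondata.sdk.file.CarbonReader;

public class FilterReadExample {
  public static void main(String[] args) throws Exception {
    // Equivalent to: name = 'robot1' OR name = 'robot7'
    CarbonReader reader = CarbonReader.builder("/tmp/carbon_output", "_temp")
        .projection(new String[]{"name", "age"})
        .filter("name", Arrays.asList("robot1", "robot7"))
        .build();
    while (reader.hasNext()) {
      Object[] row = (Object[]) reader.readNextRow();
      System.out.println(Arrays.toString(row));
    }
    reader.close();
  }
}
```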




[jira] [Created] (CARBONDATA-3462) Add usage and deployment document for index server

2019-07-03 Thread Kunal Kapoor (JIRA)
Kunal Kapoor created CARBONDATA-3462:


 Summary: Add usage and deployment document for index server
 Key: CARBONDATA-3462
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3462
 Project: CarbonData
  Issue Type: Sub-task
Reporter: Kunal Kapoor
Assignee: Kunal Kapoor








[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508039000
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3753/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508036781
 
 
   Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/12029/
   




[GitHub] [carbondata] xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon 
SDK support filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299885591
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
 [quoted CarbonReaderBuilder.java hunk omitted; identical to the hunk in the previous 
#3317 review comment above, quoted up to the DataTypes.BINARY branch]
 
 Review comment:
   filter already supports binary.




[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508035978
 
 
   Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3964/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508035678
 
 
   Build Failed  with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/12028/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508035216
 
 
   Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3752/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508034781
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/12026/
   




[GitHub] [carbondata] CarbonDataQA commented on issue #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3294: [WIP][DOC]Added documentation for index 
server
URL: https://github.com/apache/carbondata/pull/3294#issuecomment-508034840
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3751/
   




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3316: [CARBONDATA-3460] Fixed EOFException in CarbonScanRDD

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3316: [CARBONDATA-3460] 
Fixed EOFException in CarbonScanRDD
URL: https://github.com/apache/carbondata/pull/3316#discussion_r299880217
 
 

 ##
 File path: 
core/src/main/java/org/apache/carbondata/hadoop/CarbonInputSplit.java
 ##
 @@ -359,7 +361,11 @@ public Segment getSegment() {
   this.length = in.readLong();
   this.version = ColumnarFormatVersion.valueOf(in.readShort());
   this.rowCount = in.readInt();
-  this.writeDeleteDelta = in.readBoolean();
+  int numberOfDeleteDeltaFiles = in.readInt();
+  deleteDeltaFiles = new String[numberOfDeleteDeltaFiles];
 
 Review comment:
   Initialize deleteDeltaFiles only if numberOfDeleteDeltaFiles != 0. Otherwise 
an array of size zero will be present, and that's not a safe way of coding.
   
   We cannot expect the get method to check the length of the array; the user 
may check just for null as well.
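
A minimal sketch of the change being suggested, assuming the surrounding
readFields logic; only the field name and the count come from the quoted hunk,
and the readUTF call is an assumption about the serialization format:

```java
// Illustrative: allocate and read the delete delta file names only when some exist,
// so deleteDeltaFiles stays null (not a zero-length array) when there are none.
int numberOfDeleteDeltaFiles = in.readInt();
if (numberOfDeleteDeltaFiles > 0) {
  deleteDeltaFiles = new String[numberOfDeleteDeltaFiles];
  for (int i = 0; i < numberOfDeleteDeltaFiles; i++) {
    deleteDeltaFiles[i] = in.readUTF();  // assumed serialization of each file name
  }
}
// Callers can then simply null-check deleteDeltaFiles.
```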
   
   




[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299880029
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,216 @@
 [quoted index-server.md excerpt omitted; identical to the excerpt quoted in the 
#3294 review comment earlier in this digest]

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299880055
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,204 @@
 [quoted index-server.md excerpt omitted; identical to the excerpt quoted in the 
#3294 review comment earlier in this digest]

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299880087
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,204 @@
 [quoted index-server.md excerpt omitted; identical to the excerpt quoted in the 
#3294 review comment earlier in this digest]

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299880136
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,204 @@
 [quoted index-server.md excerpt omitted; identical to the excerpt quoted in the 
#3294 review comment earlier in this digest]

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299880118
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,204 @@
 [quoted index-server.md excerpt omitted; identical to the excerpt quoted in the 
#3294 review comment earlier in this digest]

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299880072
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,204 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the 
driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, 
there are limitations 
+like driver memory scale up and cache sharing between multiple applications is 
not possible. In 
+the second case, there are limitations like, there is
+no guarantee that the next query goes to the same executor to reuse the cache 
and hence cache 
+would be duplicated in multiple executors. 
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
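
To make the selection logic above concrete, here is a minimal illustrative sketch (not the actual IndexServer code). The two mappings follow the pseudocode earlier in this document, with executorToCacheMapping flattened to a single map keyed by executor for brevity.

```
// Choose the executor for a segment: reuse a recorded cache location if present,
// otherwise prefer an unassigned executor, otherwise the least loaded one by size.
def chooseExecutor(
    table: String,
    segmentNo: String,
    tableToExecutorMapping: Map[String, Map[String, String]],
    executorToCacheMapping: Map[String, Long],
    availableExecutors: Seq[String]): String = {
  tableToExecutorMapping.get(table).flatMap(_.get(segmentNo)) match {
    case Some(executor) =>
      executor // most probably the segment is still cached here (unless evicted by LRU)
    case None =>
      availableExecutors
        .find(e => !executorToCacheMapping.contains(e))    // any unassigned executor
        .getOrElse(executorToCacheMapping.minBy(_._2)._1)  // else least loaded by cache size
  }
}
```

After the pruning tasks finish, the driver would update executorToCacheMapping with the actual cache sizes returned, as described above.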
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
+column called cache location will indicate whether the cache is from
+the executor or the driver. For drop cache, the user has to enable/disable the
+index server using the dynamic configuration to clear the cache of the
+desired location.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
+cache as data cached in JDBCServer executors would be of no use.
+
+
+## Configurations
+
+# carbon.properties(JDBCServer) 
+
+| Name |  Default Value|  Description |
+|:--:|:-:|:--:   |
+| carbon.enable.index.server   |  false | Enable 

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879903
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,128 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879928
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,204 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
+column called cache location will indicate whether the cache is from
+the executor or the driver. For drop cache, the user has to enable/disable the
+index server using the dynamic configuration to clear the cache of the
+desired location.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
+cache as data cached in JDBCServer executors would be of no use.
+
+
+## Configurations
+
+# carbon.properties(JDBCServer) 
+
+| Name |  Default Value|  Description |
+|:--:|:-:|:--:   |
+| carbon.enable.index.server   |  false | Enable 

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879758
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,128 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879732
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,128 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to
+driver.
+
+
+## Configurations
+
+# carbon.properties 
+
+| Name |  Default Value|  Description |
+|:--:|:-:|:--:   |
+| carbon.enable.index.server   |  false | Enable the use of index server 
for pruning|
+| carbon.index.server.ip |NA   |   Specify the IP/HOST on which the server 
would be started. Better to specify the private IP. | 
+| carbon.index.server.port | NA | The port on which the index server has to be 
started. |
+| carbon.disable.index.server.fallback | false | Whether to enable/disable 
fallback for index server. Should be used for testing purposes only |
+|carbon.index.server.max.worker.threads| 500 | Number of RPC handlers to open 
for accepting the requests from JDBC 
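
As an illustration, the JDBCServer-side carbon.properties might contain entries like the following; the host, port and values are placeholders for this sketch, not recommendations, and only the properties named in the table above are used.

```
carbon.enable.index.server=true
carbon.index.server.ip=192.168.1.10
carbon.index.server.port=10020
```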

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879716
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,128 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879661
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,128 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to
+driver.
+
+
+## Configurations
+
+# carbon.properties 
+
+| Name |  Default Value|  Description |
+|:--:|:-:|:--:   |
+| carbon.enable.index.server   |  false | Enable the use of index server 
for pruning|
+| carbon.index.server.ip |NA   |   Specify the IP/HOST on which the server 
would be started. Better to specify the private IP. | 
+| carbon.index.server.port | NA | The port on which the index server has to be 
started. |
+| carbon.disable.index.server.fallback | false | Whether to enable/disable 
fallback for index server. Should be used for testing purposes only |
+|carbon.index.server.max.worker.threads| 500 | Number of RPC handlers to open 
for accepting the requests from JDBC 

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879696
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,128 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to
+driver.
+
+
+## Configurations
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: [WIP][DOC]Added 
documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299879627
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,128 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory having to scale up, and cache sharing between multiple applications is
+not possible. In the second case, there is no guarantee that the next query goes to the same
+executor to reuse the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists, then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found, then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment, then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to
+driver.
+
+
+## Configurations
+
+# carbon.properties 
+
+| Name |  Default Value|  Description |
+|:--:|:-:|:--:   |
+| carbon.enable.index.server   |  false | Enable the use of index server 
for pruning|
+| carbon.index.server.ip |NA   |   Specify the IP/HOST on which the server 
would be started. Better to specify the private IP. | 
+| carbon.index.server.port | NA | The port on which the index server has to be 
started. |
+| carbon.disable.index.server.fallback | false | Whether to enable/disable 
fallback for index server. Should be used for testing purposes only |
+|carbon.index.server.max.worker.threads| 500 | Number of RPC handlers to open 
for accepting the requests from JDBC 

[GitHub] [carbondata] xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon 
SDK support filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299879460
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
 
 Review comment:
   removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon 
SDK support filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299879490
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(null, DataTypes.STRING));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(0), DataTypes.STRING));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(i), DataTypes.STRING));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  private CarbonReaderBuilder filter(String columnName, DataType dataType,
+ List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(null, dataType));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(0), dataType));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(i), dataType));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, String dataType, 
List values) {
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon 
SDK support filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299876892
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter equal values set.

2019-07-03 Thread GitBox
xubo245 commented on a change in pull request #3317: [CARBONDATA-3461] Carbon 
SDK support filter equal values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299876914
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(null, DataTypes.STRING));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(0), DataTypes.STRING));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(i), DataTypes.STRING));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  private CarbonReaderBuilder filter(String columnName, DataType dataType,
+ List values) {
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3313: [CARBONDATA-3458] Setting Spark Execution Id to null only for Spark version 2.2 and below.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3313: [CARBONDATA-3458] Setting Spark 
Execution Id to null only for Spark version 2.2 and below.
URL: https://github.com/apache/carbondata/pull/3313#issuecomment-508021750
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3961/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3313: [CARBONDATA-3458] Setting Spark Execution Id to null only for Spark version 2.2 and below.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3313: [CARBONDATA-3458] Setting Spark 
Execution Id to null only for Spark version 2.2 and below.
URL: https://github.com/apache/carbondata/pull/3313#issuecomment-508020377
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/12025/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508014966
 
 
   @xubo245 : I see changes only for Equal to filter. so change description to 
support Equal to filter set or handle all the filter expressions. 
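
For reference, a hypothetical end-to-end usage of the proposed equal-to filter set could look like the sketch below. The filter(columnName, dataType, values) call follows the PR diff and may change as the review is addressed; the path, table name and column names are placeholders.

```
import java.util.Arrays;
import org.apache.carbondata.sdk.file.CarbonReader;

public class EqualToFilterSetExample {
  public static void main(String[] args) throws Exception {
    // Read only the rows whose "name" column equals one of the given values.
    CarbonReader reader = CarbonReader
        .builder("/tmp/carbon_output", "_temp")
        .projection(new String[]{"name", "age"})
        .filter("name", "string", Arrays.asList("bob", "alice"))
        .build();
    while (reader.hasNext()) {
      Object[] row = (Object[]) reader.readNextRow();
      System.out.println(Arrays.toString(row));
    }
    reader.close();
  }
}
```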


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299856650
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(null, DataTypes.STRING));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(0), DataTypes.STRING));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(i), DataTypes.STRING));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  private CarbonReaderBuilder filter(String columnName, DataType dataType,
+ List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(null, dataType));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(0), dataType));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(i), dataType));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, String dataType, 
List values) {
+if (DataTypes.STRING.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.STRING, values);
+} else if (DataTypes.INT.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.INT, values);
+} else if (DataTypes.DOUBLE.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.DOUBLE, values);
+} else if (DataTypes.FLOAT.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.FLOAT, values);
+} else if (DataTypes.SHORT.getName().equalsIgnoreCase(dataType)) {
+  return filter(columnName, DataTypes.SHORT, values);
+} else if (DataTypes.BINARY.getName().equalsIgnoreCase(dataType)) {
 
 Review comment:
   Include the datatype that supports the filter. Binary cannot support filter 
as we don't save min max for binary column.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299855580
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(null, DataTypes.STRING));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(0), DataTypes.STRING));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(i), DataTypes.STRING));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  private CarbonReaderBuilder filter(String columnName, DataType dataType,
+ List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(null, dataType));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(0), dataType));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, dataType),
+  new LiteralExpression(values.get(i), dataType));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, String dataType, 
List values) {
 
 Review comment:
   rename to **prepareEqualToExpressionSet**


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299855195
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
 
 Review comment:
   no need of this, use generic method that takes object instead of strings 
(below method)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299854656
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression 
filterExpression) {
 return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+EqualToExpression equalToExpression = new EqualToExpression(
+new ColumnExpression(columnName, DataTypes.STRING),
+new LiteralExpression(value, DataTypes.STRING));
+this.filterExpression = equalToExpression;
+return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
+Expression expression = null;
+if (0 == values.size()) {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(null, DataTypes.STRING));
+} else {
+  expression = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(0), DataTypes.STRING));
+}
+for (int i = 1; i < values.size(); i++) {
+  Expression expression2 = new EqualToExpression(
+  new ColumnExpression(columnName, DataTypes.STRING),
+  new LiteralExpression(values.get(i), DataTypes.STRING));
+  expression = new OrExpression(expression, expression2);
+}
+this.filterExpression = expression;
+return this;
+  }
+
+  private CarbonReaderBuilder filter(String columnName, DataType dataType,
+ List values) {
 
 Review comment:
   rename to **`prepareEqualToExpressionSet`** and reuse 
**`prepareEqualToExpression`** to prepare each expressions
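
For illustration, a minimal sketch of the suggested refactor inside CarbonReaderBuilder might look as follows; the method names follow the review comment, and the exact signatures are up to the PR author.

```
// Hypothetical helper, reused to build one equal-to expression per value.
private Expression prepareEqualToExpression(String columnName, DataType dataType, Object value) {
  return new EqualToExpression(
      new ColumnExpression(columnName, dataType),
      new LiteralExpression(value, dataType));
}

// Hypothetical set variant: OR together one equal-to expression per value.
private CarbonReaderBuilder prepareEqualToExpressionSet(String columnName, DataType dataType,
    List<Object> values) {
  Expression expression = values.isEmpty()
      ? prepareEqualToExpression(columnName, dataType, null)
      : prepareEqualToExpression(columnName, dataType, values.get(0));
  for (int i = 1; i < values.size(); i++) {
    expression = new OrExpression(expression,
        prepareEqualToExpression(columnName, dataType, values.get(i)));
  }
  this.filterExpression = expression;
  return this;
}
```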


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299852754
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression filterExpression) {
     return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
+    EqualToExpression equalToExpression = new EqualToExpression(
+        new ColumnExpression(columnName, DataTypes.STRING),
+        new LiteralExpression(value, DataTypes.STRING));
+    this.filterExpression = equalToExpression;
+    return this;
+  }
+
+  public CarbonReaderBuilder filter(String columnName, List values) {
 
 Review comment:
   Same comment as above.
   
   Rename to **`prepareEqualToExpressionSet`** and reuse `prepareEqualToExpression` to prepare each expression.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
ajantha-bhat commented on a change in pull request #3317: [CARBONDATA-3461] 
Carbon SDK support filter values set.
URL: https://github.com/apache/carbondata/pull/3317#discussion_r299852035
 
 

 ##
 File path: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
 ##
 @@ -170,6 +176,75 @@ public CarbonReaderBuilder filter(Expression filterExpression) {
     return this;
   }
 
+  public CarbonReaderBuilder filter(String columnName, String value) {
 
 Review comment:
   This always creates an equal-to filter, so keeping `filter` as the name is not good.
   
   I suggest changing it to **`public static Expression prepareEqualToExpression(String columnName, String value)`**,
   
   so the caller gets the equal-to filter expression from that helper and then passes it to the existing filter() method. This way the builder can support all filter expressions through one entry point.
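
   A hypothetical caller along those lines, building the equal-to Expression first and handing it to the existing Expression-based filter() on the reader builder; the table path, table name and column used here are placeholders, and the imports assume the usual CarbonData package layout:

import java.io.IOException;
import java.util.Arrays;

import org.apache.carbondata.core.metadata.datatype.DataTypes;
import org.apache.carbondata.core.scan.expression.ColumnExpression;
import org.apache.carbondata.core.scan.expression.Expression;
import org.apache.carbondata.core.scan.expression.LiteralExpression;
import org.apache.carbondata.core.scan.expression.conditional.EqualToExpression;
import org.apache.carbondata.sdk.file.CarbonReader;

public class ExpressionFilterUsageSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Equal-to expression prepared outside the builder, as the review suggests.
    Expression nameEqualsRobot = new EqualToExpression(
        new ColumnExpression("name", DataTypes.STRING),
        new LiteralExpression("robot", DataTypes.STRING));

    CarbonReader reader = CarbonReader
        .builder("/tmp/carbon_output", "_temp")  // placeholder table path and name
        .filter(nameEqualsRobot)                 // existing Expression-based filter API
        .build();
    while (reader.hasNext()) {
      System.out.println(Arrays.toString((Object[]) reader.readNextRow()));
    }
    reader.close();
  }
}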


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on issue #3313: [CARBONDATA-3458] Setting Spark Execution Id to null only for Spark version 2.2 and below.

2019-07-03 Thread GitBox
ajantha-bhat commented on issue #3313: [CARBONDATA-3458] Setting Spark 
Execution Id to null only for Spark version 2.2 and below.
URL: https://github.com/apache/carbondata/pull/3313#issuecomment-508006752
 
 
   LGTM


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3317: [CARBONDATA-3461] Carbon SDK support 
filter values set.
URL: https://github.com/apache/carbondata/pull/3317#issuecomment-508004497
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3750/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] ajantha-bhat commented on issue #3307: [CARBONDATA-3453] Fix set segment issue in adaptive execution

2019-07-03 Thread GitBox
ajantha-bhat commented on issue #3307: [CARBONDATA-3453] Fix set segment issue 
in adaptive execution
URL: https://github.com/apache/carbondata/pull/3307#issuecomment-508001154
 
 
   @ravipesala / @kunal642 / @kumarvishal09 please merge this


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] xubo245 opened a new pull request #3317: [CARBONDATA-3461] Carbon SDK support filter values set.

2019-07-03 Thread GitBox
xubo245 opened a new pull request #3317: [CARBONDATA-3461] Carbon SDK support 
filter values set.
URL: https://github.com/apache/carbondata/pull/3317
 
 
   Be sure to do all of the following checklist to help us incorporate 
   your contribution quickly and easily:
   
 - [ ] Any interfaces changed?
   Yes, new interfaces are added.
 - [ ] Any backward compatibility impacted?
   No.
 - [ ] Document update required?
   No.
 - [ ] Testing done
   Yes, tests are added.
 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
   No.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (CARBONDATA-3461) Carbon SDK support filter values set.

2019-07-03 Thread xubo245 (JIRA)
xubo245 created CARBONDATA-3461:
---

 Summary: Carbon SDK support filter values set.
 Key: CARBONDATA-3461
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3461
 Project: CarbonData
  Issue Type: New Feature
Reporter: xubo245
Assignee: xubo245


Carbon SDK support filter values set.
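
The feature, as proposed in PR #3317, would let an SDK reader be built with a set of filter values on one column. Roughly, usage could look like the following sketch; the path, table name and column are placeholders, and the value-set overload is still under review:

import java.io.IOException;
import java.util.Arrays;

import org.apache.carbondata.sdk.file.CarbonReader;

public class FilterValueSetSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Rows whose 'name' column equals any of the listed values would be returned.
    CarbonReader reader = CarbonReader
        .builder("/tmp/carbon_output", "_temp")                      // placeholder path and table
        .filter("name", Arrays.asList("robot0", "robot1", "robot2")) // proposed value-set overload
        .build();
    while (reader.hasNext()) {
      System.out.println(Arrays.toString((Object[]) reader.readNextRow()));
    }
    reader.close();
  }
}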



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [carbondata] CarbonDataQA commented on issue #3313: [CARBONDATA-3458] Setting Spark Execution Id to null only for Spark version 2.2 and below.

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3313: [CARBONDATA-3458] Setting Spark 
Execution Id to null only for Spark version 2.2 and below.
URL: https://github.com/apache/carbondata/pull/3313#issuecomment-507995385
 
 
   Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3749/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3316: [CARBONDATA-3460] Fixed EOFException in CarbonScanRDD

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3316: [CARBONDATA-3460] Fixed EOFException in 
CarbonScanRDD
URL: https://github.com/apache/carbondata/pull/3316#issuecomment-507994472
 
 
   Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3959/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3309: [CARBONDATA-3455] Job Group ID is not displayed for the IndexServer Jobs

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3309: [CARBONDATA-3455] Job Group ID is not 
displayed for the IndexServer Jobs
URL: https://github.com/apache/carbondata/pull/3309#issuecomment-507970081
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/12024/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3309: [CARBONDATA-3455] Job Group ID is not displayed for the IndexServer Jobs

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3309: [CARBONDATA-3455] Job Group ID is not 
displayed for the IndexServer Jobs
URL: https://github.com/apache/carbondata/pull/3309#issuecomment-507967438
 
 
   Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3960/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] CarbonDataQA commented on issue #3316: [CARBONDATA-3460] Fixed EOFException in CarbonScanRDD

2019-07-03 Thread GitBox
CarbonDataQA commented on issue #3316: [CARBONDATA-3460] Fixed EOFException in 
CarbonScanRDD
URL: https://github.com/apache/carbondata/pull/3316#issuecomment-507963665
 
 
   Build Success with Spark 2.3.2, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/12023/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] shivamasn commented on a change in pull request #3313: [CARBONDATA-3458] Setting Spark Execution Id to null only for Spark version 2.2 and below.

2019-07-03 Thread GitBox
shivamasn commented on a change in pull request #3313: [CARBONDATA-3458] 
Setting Spark Execution Id to null only for Spark version 2.2 and below.
URL: https://github.com/apache/carbondata/pull/3313#discussion_r299797461
 
 

 ##
 File path: 
integration/spark-datasource/src/main/scala/org/apache/spark/util/SparkUtil.scala
 ##
 @@ -57,4 +59,12 @@ object SparkUtil {
     isSparkVersionXandAbove(xVersion, true)
   }
 
+  // "spark.sql.execution.id is already set" exception will be
+  // thrown if not set to null in spark2.2 and below versions
+  def setSparkExecutionId(sparkSession: SparkSession, executionId: String): Unit = {
 
 Review comment:
   @ravipesala Handled the comments, please review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services