[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-18 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r305198230
 
 

 ##
 File path: 
integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala
 ##
 @@ -178,6 +171,8 @@ object IndexServer extends ServerInterface {
   server.stop()
 }
   })
+  CarbonProperties.getInstance().addProperty(CarbonCommonConstants
 
 Review comment:
  Yeah, but this is added as a flag to tell the common logic that the index server is enabled. The
user can still use SET to control it per table.
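
  For reference, a hedged sketch of how this could look from a Spark SQL session (the per-table
  property name below is an assumption based on the carbon.enable.index.server prefix, not a
  confirmed API):

  ```scala
  import org.apache.spark.sql.SparkSession

  // Illustrative only; assumes a session with CarbonData on the classpath.
  val spark = SparkSession.builder().appName("index-server-toggle").getOrCreate()
  spark.sql("SET carbon.enable.index.server = true")
  // Hypothetical per-table form following the same property prefix:
  spark.sql("SET carbon.enable.index.server.my_db.my_table = true")
  ```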


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-18 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r305198086
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,226 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently prunes and caches all block/blocklet datamap index information in the driver for
+normal tables; for Bloom/Index datamaps, the JDBC driver will launch a job to prune and cache the
+datamaps in the executors.
+
+This causes the driver to become a bottleneck in the following ways:
+1. If the cache size becomes huge (70-80% of the driver memory) then there can be excessive GC in
+the driver, which can slow down the query, and the driver may even go OutOfMemory.
+2. LRU has to evict a lot of elements from the cache to accommodate the new objects, which would
+in turn slow down the queries.
+3. For Bloom there is no guarantee that the next query goes to the same executor to reuse the cache,
+and hence the cache could be duplicated in multiple executors.
+
+The Distributed Index Cache Server aims to solve the above-mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server service in the form of
+a request. The request will consist of the table name, segments, filter expression and other
+information used for pruning.
+
+In the IndexServer service, a pruning RDD is fired which takes care of the pruning for that
+request. This RDD creates tasks based on the number of segments that are applicable for
+pruning. It can happen that the user has specified segments to access for that table, in which
+case only the specified segments are applicable for pruning. Refer:
+[query-data-with-specified-segments](https://github.com/apache/carbondata/blob/6e50c1c6fc1d6e82a4faf6dc6e0824299786ccc0/docs/segment-management-on-carbondata.md#query-data-with-specified-segments).
+The IndexServer driver has two important tasks: distributing the segments equally among the
+available executors and keeping track of the executor where each segment is cached.
+
+To achieve this, two separate mappings are maintained:
+1. Segment to executor location:
+This mapping is maintained for each table and enables the index server to track the
+cache location for each segment.
+
+2. Cache size held by each executor:
+This mapping is used to distribute the segments equally (on the basis of size) among the
+executors.
+  
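+A minimal sketch of the shape of these two mappings (illustrative Scala with example values; the
+actual data structures used by the index server may differ):
+
+```scala
+// tableName -> (segmentNo -> uniqueExecutorIdentifier)
+val tableToExecutorMapping: Map[String, Map[String, String]] =
+  Map("default_tbl" -> Map("0" -> "executor_1", "1" -> "executor_2"))
+
+// HostAddress -> (ExecutorId -> cacheSize in bytes)
+val executorToCacheMapping: Map[String, Map[String, Long]] =
+  Map("192.168.1.10" -> Map("executor_1" -> 4096L, "executor_2" -> 8192L))
+```
+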
+Once a request is received, each segment is iterated over and
+checked against tableToExecutorMapping to find whether an executor is already
+assigned. If a mapping already exists then it means that, most
+probably (if not evicted by LRU), the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then executorToCacheMapping is first checked against
+the available executor list to find whether any unassigned executor is
+present, and that executor is used for the current segment. If all the
+executors are already assigned some segment then the least loaded
+executor is chosen on the basis of size.
+
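+A simplified sketch of the assignment logic described above (illustrative only; the cache-size
+mapping is flattened to executorId -> size for brevity, and the real index server implementation
+may differ):
+
+```scala
+import scala.collection.mutable
+
+// Illustrative sketch, not the actual index server code.
+object SegmentAssignment {
+  // tableName -> (segmentNo -> executorId)
+  val tableToExecutorMapping = mutable.Map[String, mutable.Map[String, String]]()
+  // executorId -> total index size (in bytes) assumed to be cached there
+  val executorToCacheMapping = mutable.Map[String, Long]()
+
+  def assign(table: String, segmentNo: String, segmentIndexSize: Long,
+             availableExecutors: Seq[String]): String = {
+    require(availableExecutors.nonEmpty, "no executors available")
+    val tableMapping = tableToExecutorMapping.getOrElseUpdate(table, mutable.Map[String, String]())
+    tableMapping.getOrElseUpdate(segmentNo, {
+      // prefer an executor that holds no cache yet, otherwise the least loaded one
+      val executor = availableExecutors
+        .find(e => !executorToCacheMapping.contains(e))
+        .getOrElse(availableExecutors.minBy(e => executorToCacheMapping.getOrElse(e, 0L)))
+      executorToCacheMapping(executor) =
+        executorToCacheMapping.getOrElse(executor, 0L) + segmentIndexSize
+      executor
+    })
+  }
+}
+```
+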
+Initially the segment index size is used to distribute the
+segments fairly among the executors, because the actual cache size would
+be known to the driver only when the segments are cached and the appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments (version: 1.1) the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed, the tasks return the cache size held by
+each executor, which is updated in executorToCacheMapping, and
+the pruned blocklets, which are further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable, the segments that were
+earlier being handled by them would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
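+
+A hedged sketch of how such a reallocation could reuse the assignment logic shown earlier
+(illustrative only):
+
+```scala
+// Illustrative sketch, building on the SegmentAssignment object shown earlier.
+object ExecutorReallocation {
+  def onExecutorLost(deadExecutor: String, aliveExecutors: Seq[String]): Unit = {
+    SegmentAssignment.executorToCacheMapping.remove(deadExecutor)
+    for ((table, segMap) <- SegmentAssignment.tableToExecutorMapping) {
+      // snapshot the orphaned segments before mutating the inner map
+      val orphaned = segMap.collect { case (seg, exec) if exec == deadExecutor => seg }.toSeq
+      orphaned.foreach { segmentNo =>
+        segMap.remove(segmentNo)
+        // the actual index size is unknown here; 0 keeps the sketch simple
+        SegmentAssignment.assign(table, segmentNo, 0L, aliveExecutors)
+      }
+    }
+  }
+}
+```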
+
+## MetaCache DDL
+The SHOW METACACHE DDL has a new column called cache location, which indicates whether the cache
+resides in an executor or in the driver. To drop the cache, the user has to enable/disable the
+index server using the dynamic configuration in order to clear the cache of the desired location.
+
+Refer: 
[MetaCacheDDL](https://github.com/apache/carbondata/blob/master/docs/ddl-of-carbondata.md#cache)
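+
+A hedged example of how this could be exercised from a Spark SQL session (SHOW METACACHE as per
+the MetaCacheDDL link above; the property name of the dynamic switch is an assumption):
+
+```scala
+import org.apache.spark.sql.SparkSession
+
+// Illustrative only; assumes CarbonData is available in the session.
+val spark = SparkSession.builder().appName("metacache-demo").getOrCreate()
+spark.sql("SET carbon.enable.index.server = false") // switch caching back to the driver
+spark.sql("SHOW METACACHE").show(false)             // the cache location column shows driver/executor
+```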
+
+## Fallback
+In case of any failure, the index server would fall back to embedded mode,
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps, then the job would be passed on to the
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired after pruning to clear the
+cache, as data cached in the JDBCServer executors would be of no use.

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-18 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r305198024
 
 

 ##
 File path: 
integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala
 ##
 @@ -178,6 +171,8 @@ object IndexServer extends ServerInterface {
   server.stop()
 }
   })
+  CarbonProperties.getInstance().addProperty(CarbonCommonConstants
+.CARBON_ENABLE_INDEX_SERVER, "true")
   LOGGER.info(s"Index cache server running on ${ server.getPort } port")
 
 Review comment:
  An exception is already thrown in
org.apache.carbondata.core.util.CarbonProperties#getIndexServerPort.
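
  For context, a rough sketch of the kind of validation such a getter can perform (illustrative
  only; the actual CarbonProperties implementation and property key are assumptions here):

  ```scala
  // Illustrative sketch, not the real CarbonProperties code.
  object IndexServerConfig {
    private val properties = new java.util.Properties()

    def getIndexServerPort: Int = {
      // "carbon.index.server.port" is assumed as the property key
      val value = Option(properties.getProperty("carbon.index.server.port"))
        .getOrElse(throw new IllegalArgumentException("carbon.index.server.port is not configured"))
      try value.toInt
      catch {
        case _: NumberFormatException =>
          throw new IllegalArgumentException(s"Invalid carbon.index.server.port value: $value")
      }
    }
  }
  ```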


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303736131
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations:
+the driver memory has to scale up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+The Distributed Index Cache Server aims to solve the above-mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server application using
+the Hadoop RPC framework in the form of a request. The request will consist of the table name,
+segments, filter expression and other information used for pruning.
+
+In the IndexServer application, a pruning RDD is fired which takes care of the pruning for that
+request. This RDD creates tasks based on the number of segments that are applicable for
+pruning. It can happen that the user has specified segments to access for that table, in which
+case only the specified segments are applicable for pruning.
+
+The IndexServer driver has two important tasks: distributing the segments equally among the
+available executors and keeping track of the cache location (where each segment's cache is present).
+
+To achieve this, two separate mappings are maintained:
+1. Segment to executor location:
+This mapping is maintained for each table and enables the index server to track the
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor:
+This mapping is used to distribute the segments equally (on the basis of size) among the
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment is iterated over and
+checked against tableToExecutorMapping to find whether an executor is already
+assigned. If a mapping already exists then it means that, most
+probably (if not evicted by LRU), the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then executorToCacheMapping is first checked against
+the available executor list to find whether any unassigned executor is
+present, and that executor is used for the current segment. If all the
+executors are already assigned some segment then the least loaded
+executor is chosen on the basis of size.
+
+Initially the segment index size is used to distribute the
+segments fairly among the executors, because the actual cache size would
+be known to the driver only when the segments are cached and the appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round-robin
+fashion.
+
+After the job is completed, the tasks return the cache size held by
+each executor, which is updated in executorToCacheMapping, and
+the pruned blocklets, which are further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable, the segments that were
+earlier being handled by them would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The SHOW METACACHE DDL has a new column called cache location, which indicates whether the cache
+resides in an executor or in the driver. To drop the cache, the user has to enable/disable the
+index server using the dynamic configuration in order to clear the cache of the desired location.
+
+## Fallback
+In case of any failure, the index server would fall back to embedded mode,
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps, then the job would be passed on to the
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
+cache, as data cached in the JDBCServer executors would be of no use.
+
+## Writing splits to a file
+If the response is too huge, then it is better to write the splits to a file so that the driver
+can read this file and create the splits. This can be controlled using the property
+'carbon.index.server.inmemory.serialization.threshold.inKB'. By default, the minimum value for 

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303736104
 
 

 ##
 File path: docs/index-server.md
 ##

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731363
 
 

 ##
 File path: docs/index-server.md
 ##

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731403
 
 

 ##
 File path: docs/index-server.md
 ##

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731446
 
 

 ##
 File path: docs/index-server.md
 ##

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731456
 
 

 ##
 File path: docs/index-server.md
 ##

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731579
 
 

 ##
 File path: docs/index-server.md
 ##

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731377
 
 

 ##
 File path: docs/index-server.md
 ##

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731258
 
 

 ##
 File path: docs/index-server.md
 ##
+## MetaCache DDL
+The show metacache DDL has a new column called cache location will indicate 
whether the cache is
 
 Review comment:
  The index server is a spark-submit application; it cannot accept SQL commands.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731282
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the 
driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, 
there are limitations 
+like driver memory scale up and cache sharing between multiple applications is 
not possible. In 
+the second case, there are limitations like, there is
+no guarantee that the next query goes to the same executor to reuse the cache 
and hence cache 
+would be duplicated in multiple executors. 
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments 
equally among the 
+available executors and keeping track of the cache location(where the segment 
cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received each segment would be iterated over and
+checked against tableToExecutorMapping to find if a executor is already
+assigned. If a mapping already exists then it means that most
+probably(if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are assigned with some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executor because the actual cache size would
+be known to the driver only when the segments are cached and appropriate
+information is returned to the driver.
+
+**NOTE:** In case of legacy segment the index size if not available
+therefore all the legacy segments would be processed in a round robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show metacache DDL has a new column called cache location which will indicate whether the cache is
+from the executor or the driver. To drop the cache, the user has to enable/disable the index server using the
+dynamic configuration to clear the cache of the desired location.
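+
+For example, assuming the SHOW/DROP METACACHE syntax from the DDL documentation and the
+carbon.enable.index.server property listed under Configurations, the flow could look roughly
+like this (table name and values are placeholders):
+```
+// Illustrative only: the new "cache location" column tells whether an entry
+// lives in the driver or in the index server executors.
+spark.sql("SHOW METACACHE ON TABLE sales").show(false)
+
+// The location that gets cleared follows the dynamic index server setting, so switch it
+// to the desired side before dropping.
+spark.sql("SET carbon.enable.index.server=false") // target the driver side cache
+spark.sql("DROP METACACHE ON TABLE sales")
+```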
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode,
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to the
+driver.
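+
+The chain of responsibility can be pictured roughly as below; every name is an illustrative
+stub, not an actual class of the index server.
+```
+object FallbackSketch {
+  type Request  = String                     // stands in for the real pruning request
+  type Blocklet = String                     // stands in for a pruned blocklet
+
+  def indexServerPrune(r: Request): Seq[Blocklet] = sys.error("index server unreachable")
+  def embeddedModePrune(r: Request): Seq[Blocklet] = Seq("blocklet-0") // JDBCServer's executors
+  def driverPrune(r: Request): Seq[Blocklet] = Seq("blocklet-0")       // last resort, in the driver
+
+  def prune(r: Request): Seq[Blocklet] =
+    try indexServerPrune(r)                         // 1. distributed pruning in the index server
+    catch { case _: Exception =>
+      try embeddedModePrune(r)                      // 2. embedded mode fired from the JDBCServer
+      catch { case _: Exception => driverPrune(r) } // 3. fall back to driver-side pruning
+    }
+}
+```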
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
 
 Review comment:
   changed the line


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731159
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
 
 Review comment:
   distribution logic is already mentioned above


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731106
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available
 
 Review comment:
   added version


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731088
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
 
 Review comment:
   removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303730940
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
 
 Review comment:
   removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303730996
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the 
driver. For bloom
+datamap, it can prune the splits in a distributed way. In the first case, 
there are limitations 
+like driver memory scale up and cache sharing between multiple applications is 
not possible. In 
+the second case, there are limitations like, there is
+no guarantee that the next query goes to the same executor to reuse the cache 
and hence cache 
+would be duplicated in multiple executors. 
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
 
 Review comment:
   added link to set segments doc


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303731022
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
 
 Review comment:
   changed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-15 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r303730835
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,238 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the 
driver. For bloom
 
 Review comment:
   changed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-12 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r302915771
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,231 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
+column called cache location will indicate whether the cache is from
+the executor or the driver. For drop cache the user has to enable/disable the
 
 Review comment:
   changed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-12 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r302915735
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,231 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
+column called cache location will indicate whether the cache is from
+the executor or the driver. For drop cache the user has to enable/disable the
+index server using the dynamic configuration to clear the cache of the
+desired location.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode,
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to the
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
+cache as data cached in JDBCServer executors would be of no use.
+
+## Writing splits to a file
+If the response is very large then it is better to write the splits to a file so that the driver can
+read this file and create the splits. This can be controlled using the 

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-12 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r302915756
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,231 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
+column called cache location will indicate whether the cache is from
+the executor or the driver. For drop cache the user has to enable/disable the
+index server using the dynamic configuration to clear the cache of the
+desired location.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode,
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to the
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
+cache as data cached in JDBCServer executors would be of no use.
+
+## Writing splits to a file
+If the response is very large then it is better to write the splits to a file so that the driver can
+read this file and create the splits. This can be controlled using the 

[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-12 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r302915788
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,231 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
 
 Review comment:
   changed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [carbondata] kunal642 commented on a change in pull request #3294: [CARBONDATA-3462][DOC]Added documentation for index server

2019-07-03 Thread GitBox
kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r299929028
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,216 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information into the driver. For the bloom
+datamap, it can prune the splits in a distributed way. In the first case, there are limitations
+such as driver memory scale-up, and cache sharing between multiple applications is not possible.
+In the second case, there is no guarantee that the next query goes to the same executor to reuse
+the cache, and hence the cache would be duplicated in multiple executors.
+Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server 
application using 
+the Hadoop RPC framework in form of a request. The request will consist of the 
table name, segments,
+filter expression and other information used for pruning.
+
+In IndexServer application a pruning RDD is fired which will take care of the 
pruning for that 
+request. This RDD will be creating tasks based on the number of segments that 
are applicable for 
+pruning. It can happen that the user has specified segments to access for that 
table, so only the
+specified segments would be applicable for pruning.
+
+IndexServer driver would have 2 important tasks, distributing the segments equally among the
+available executors and keeping track of the cache location (where the segment cache is present).
+
+To achieve this 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index 
server to track the 
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> 
uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor: 
+This mapping will be used to distribute the segments equally(on the basis 
of size) among the 
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and
+checked against tableToExecutorMapping to find if an executor is already
+assigned. If a mapping already exists then it means that most
+probably (if not evicted by LRU) the segment is already cached in that
+executor and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then first check executorToCacheMapping against
+the available executor list to find if any unassigned executor is
+present and use that executor for the current segment. If all the
+executors are already assigned some segment then find the least loaded
+executor on the basis of size.
+
+Initially the segment index size would be used to distribute the
+segments fairly among the executors because the actual cache size would
+be known to the driver only when the segments are cached and the
+appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available,
+therefore all the legacy segments would be processed in a round robin
+fashion.
+
+After the job is completed the tasks would return the cache size held by
+each executor which would be updated to the executorToCacheMapping and
+the pruned blocklets which would be further used for result fetching.
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable then the segments that were
+earlier being handled by those would be reassigned to some other
+executor using the distribution logic.
+
+**Note:** Cache loading would be done again in the new executor for the
+current query.
+
+## MetaCache DDL
+The show/drop metacache DDLs have been modified to operate on the
+executor side cache as well. So when the user fires show cache, a new
+column called cache location will indicate whether the cache is from
+the executor or the driver. For drop cache the user has to enable/disable the
+index server using the dynamic configuration to clear the cache of the
+desired location.
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode,
+which means that the JDBCServer would take care of distributed pruning.
+A similar job would be fired by the JDBCServer which would take care of
+pruning using its own executors. If for any reason the embedded mode
+also fails to prune the datamaps then the job would be passed on to the
+driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the
+cache as data cached in JDBCServer executors would be of no use.
+
+
+## Configurations
+
+# carbon.properties(JDBCServer) 
+
+| Name |  Default Value|  Description |
+|:--:|:-:|:--:   |
+| carbon.enable.index.server   |  false