[GitHub] [carbondata] sgururajshetty commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server

2019-06-19 Thread GitBox
sgururajshetty commented on a change in pull request #3294: [WIP][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r295237010
 
 

 ##
 File path: docs/index-server.md
 ##
 @@ -0,0 +1,204 @@
+
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For the bloom
+datamap, the splits can be pruned in a distributed way. The first approach has limitations: the
+driver memory has to scale up with the cache, and the cache cannot be shared between multiple
+applications. The second approach has a different limitation: there is no guarantee that the next
+query goes to the same executor to reuse the cache, and hence the cache may be duplicated in
+multiple executors.
+The Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table is routed to the index server application as a request
+over the Hadoop RPC framework. The request consists of the table name, the segments, the filter
+expression and other information needed for pruning.
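+
+A minimal Scala-style sketch of what such a request might carry is shown below. The class and
+field names are assumptions used purely for illustration; they are not the actual IndexServer
+request types.
+```
+// Illustrative only: the fields mirror the description above, the names are assumed.
+case class DistributedPruneRequest(
+    tableName: String,
+    segments: Seq[String],
+    filterExpression: String,
+    otherPruningInfo: Map[String, String] = Map.empty)
+```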
+
+In the IndexServer application a pruning RDD is fired which takes care of the pruning for that
+request. This RDD creates its tasks based on the number of segments that are applicable for
+pruning. If the user has specified the segments to access for that table, only those segments are
+applicable for pruning.
+
+The IndexServer driver has two important tasks: distributing the segments equally among the
+available executors, and keeping track of the cache location (i.e. where each segment's cache is
+present).
+
+To achieve this, two separate mappings are maintained:
+1. Segment to executor location:
+This mapping is maintained for each table and enables the index server to track the cache
+location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor:
+This mapping is used to distribute the segments equally (on the basis of size) among the
+executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment is iterated over and checked against
+tableToExecutorMapping to find whether an executor is already assigned. If a mapping already
+exists, the segment is most probably (unless evicted by LRU) already cached in that executor, and
+the task for that segment is fired on the same executor.
+
+If no mapping is found, executorToCacheMapping is first checked against the available executor
+list to find an unassigned executor, which is then used for the current segment. If all the
+executors already have segments assigned, the least loaded executor (on the basis of size) is
+chosen.
+
+Initially the segment index size is used to distribute the segments fairly among the executors,
+because the actual cache size becomes known to the driver only after the segments are cached and
+the corresponding information is returned to the driver.
+
+**NOTE:** For legacy segments the index size is not available, therefore all the legacy segments
+are processed in a round-robin fashion.
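+
+The distribution logic described above can be sketched roughly as follows. This is an
+illustrative Scala sketch only: the helper name `assignExecutor` is an assumption, and
+`executorToCacheMapping` is flattened to a single executor-to-size map for brevity.
+```
+import scala.collection.mutable
+
+// Pick an executor for one segment using the two mappings described earlier.
+def assignExecutor(
+    tableName: String,
+    segmentNo: String,
+    indexSize: Long,
+    tableToExecutorMapping: mutable.Map[String, mutable.Map[String, String]],
+    executorToCacheMapping: mutable.Map[String, Long]): String = {
+  val segmentMapping =
+    tableToExecutorMapping.getOrElseUpdate(tableName, mutable.Map.empty[String, String])
+  val executorId = segmentMapping.get(segmentNo) match {
+    // Segment seen before: reuse the executor, its cache is most probably still there.
+    case Some(executor) => executor
+    case None =>
+      // Prefer an executor with nothing cached yet, otherwise the least loaded one by size.
+      // (Legacy segments, whose index size is unknown, would instead be spread round-robin.)
+      val unassigned = executorToCacheMapping.collectFirst { case (exec, 0L) => exec }
+      unassigned.getOrElse(executorToCacheMapping.minBy(_._2)._1)
+  }
+  segmentMapping.put(segmentNo, executorId)
+  // Account for the segment's index size so the next assignment sees the updated load.
+  executorToCacheMapping(executorId) = executorToCacheMapping(executorId) + indexSize
+  executorId
+}
+```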
+
+After the job is completed, the tasks return both the cache size held by each executor, which is
+used to update executorToCacheMapping, and the pruned blocklets, which are further used for
+result fetching.
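+
+A minimal sketch of that bookkeeping step, assuming the tasks report (executorId, cacheSize)
+pairs (the helper name is an assumption):
+```
+import scala.collection.mutable
+
+// Merge the cache sizes reported by the pruning tasks back into the driver-side mapping.
+def updateCacheSizes(
+    reported: Seq[(String, Long)],
+    executorToCacheMapping: mutable.Map[String, Long]): Unit =
+  reported.foreach { case (executorId, cacheSize) =>
+    executorToCacheMapping(executorId) = cacheSize
+  }
+```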
+
+## Reallocation of executor
+In case executor(s) become dead/unavailable, the segments that were earlier handled by those
+executors are reassigned to other executors using the same distribution logic.
+
+**Note:** The cache is loaded again in the new executor for the current query.
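+
+An illustrative sketch of that invalidation step, reusing the maps from above (the helper name
+and structure are assumptions, not the actual IndexServer code):
+```
+import scala.collection.mutable
+
+// Drop mappings that point to dead executors; the next request re-runs the distribution
+// logic for those segments, which also reloads their cache on the newly chosen executors.
+def invalidateDeadExecutors(
+    deadExecutors: Set[String],
+    tableToExecutorMapping: mutable.Map[String, mutable.Map[String, String]],
+    executorToCacheMapping: mutable.Map[String, Long]): Unit = {
+  tableToExecutorMapping.values.foreach { segmentMapping =>
+    val stale = segmentMapping.collect {
+      case (segment, executor) if deadExecutors.contains(executor) => segment
+    }
+    stale.foreach(segmentMapping.remove)
+  }
+  deadExecutors.foreach(executorToCacheMapping.remove)
+}
+```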
+
+## MetaCache DDL
+The show/drop metacache DDL have been modified to operate on the executor side cache as well.
+When the user fires a show metacache command, a new column called cache location indicates
+whether the cache resides on the executor or on the driver. For drop metacache the user has to
+enable/disable the index server using the dynamic configuration in order to clear the cache of
+the desired location.
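+
+For reference, a few hedged examples of how this could look from a Spark session. They assume
+the SHOW METACACHE/DROP METACACHE syntax from the CarbonData DDL documentation and the dynamic
+property from the configuration table below; the table name `sales` is hypothetical.
+```
+// Inspect the cache; the output is expected to include a cache location column
+// indicating whether an entry lives on the driver or on an index server executor.
+spark.sql("SHOW METACACHE ON TABLE sales").show(false)
+
+// To clear the driver (JDBCServer) side cache, disable the index server for the session first.
+spark.sql("SET carbon.enable.index.server=false")
+spark.sql("DROP METACACHE ON TABLE sales")
+```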
+
+## Fallback
+In case of any failure the index server falls back to embedded mode, which means that the
+JDBCServer takes care of the distributed pruning. A similar job is fired by the JDBCServer,
+which prunes using its own executors. If for any reason the embedded mode also fails to prune
+the datamaps, the job is handed over to the driver.
+
+**NOTE:** In embedded mode a job is also fired to clear the cache, as data cached in the
+JDBCServer executors would be of no use.
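+
+The fallback chain can be summarised with the following sketch. It is purely illustrative
+control flow: the three pruning functions are passed in as parameters because their real
+counterparts in the IndexServer/JDBCServer code are not shown here.
+```
+// Conceptual fallback order for pruning: index server -> embedded mode -> driver.
+def pruneWithFallback(
+    request: String,
+    pruneViaIndexServer: String => Seq[String],
+    pruneEmbedded: String => Seq[String],
+    pruneInDriver: String => Seq[String]): Seq[String] =
+  try pruneViaIndexServer(request)
+  catch {
+    case _: Exception =>
+      try pruneEmbedded(request)
+      catch { case _: Exception => pruneInDriver(request) }
+  }
+```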
+
+
+## Configurations
+
+### carbon.properties (JDBCServer)
+
+| Name | Default Value | Description |
+|:----:|:-------------:|:-----------:|
+| carbon.enable.index.server   |  false | 
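+
+A hedged example of turning the feature on. It assumes the property above can be set either
+statically in carbon.properties on the JDBCServer side or dynamically for the session, as
+referred to in the MetaCache DDL section; the table and column names are hypothetical.
+```
+// Static configuration, read from carbon.properties at JDBCServer startup:
+//   carbon.enable.index.server = true
+
+// Dynamic, session-level toggle followed by a query that would be routed to the index server.
+spark.sql("SET carbon.enable.index.server=true")
+spark.sql("SELECT count(*) FROM sales WHERE country = 'IN'").show()
+```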
