kunal642 commented on a change in pull request #3294: 
[CARBONDATA-3462][DOC]Added documentation for index server
URL: https://github.com/apache/carbondata/pull/3294#discussion_r302915735
 
 

 ##########
 File path: docs/index-server.md
 ##########
 @@ -0,0 +1,231 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more 
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership. 
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with 
+    the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software 
+    distributed under the License is distributed on an "AS IS" BASIS, 
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and 
+    limitations under the License.
+-->
+
+# Distributed Index Server
+
+## Background
+
+Carbon currently caches all block/blocklet datamap index information in the driver. For the
+bloom datamap, the splits can already be pruned in a distributed way. The first approach has
+limitations: the driver memory has to scale up with the cache, and the cache cannot be shared
+between multiple applications. The second approach has the limitation that there is no
+guarantee that the next query goes to the same executor to reuse the cache, and hence the
+cache would be duplicated in multiple executors.
+The Distributed Index Cache Server aims to solve the above mentioned problems.
+
+## Distribution
+When enabled, any query on a carbon table will be routed to the index server application as a
+request over the Hadoop RPC framework. The request will consist of the table name, segments,
+filter expression and other information used for pruning.
+
+In the IndexServer application a pruning RDD is fired which will take care of the pruning for
+that request. This RDD will create its tasks based on the number of segments that are
+applicable for pruning. If the user has specified the segments to access for that table, then
+only the specified segments would be applicable for pruning.
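+
+For illustration, such a request could be modeled as shown below. The class and field names
+are hypothetical and only mirror the information listed above, not the actual RPC interface.
+```
+// Hypothetical shape of a pruning request sent over Hadoop RPC.
+case class PruneRequest(
+    tableName: String,        // fully qualified carbon table name
+    segmentNos: Seq[String],  // segments applicable for pruning
+    filterExpression: String) // serialized filter used to prune the blocklets
+```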
+
+The IndexServer driver would have 2 important tasks: distributing the segments equally among
+the available executors, and keeping track of the cache location (the executor where each
+segment's cache is present).
+
+To achieve this, 2 separate mappings would be maintained as follows.
+1. segment to executor location:
+This mapping will be maintained for each table and will enable the index server to track the
+cache location for each segment.
+```
+tableToExecutorMapping = Map(tableName -> Map(segmentNo -> uniqueExecutorIdentifier))
+```
+2. Cache size held by each executor:
+    This mapping will be used to distribute the segments equally (on the basis of size) among
+    the executors.
+```
+executorToCacheMapping = Map(HostAddress -> Map(ExecutorId -> cacheSize))
+```
+  
+Once a request is received, each segment would be iterated over and checked against
+tableToExecutorMapping to find if an executor is already assigned. If a mapping already
+exists then it means that most probably (if not evicted by LRU) the segment is already cached
+in that executor, and the task for that segment has to be fired on this executor.
+
+If a mapping is not found then executorToCacheMapping is first checked against the available
+executor list to find if any unassigned executor is present, and that executor is used for
+the current segment. If all the executors are already assigned some segment then the least
+loaded executor is found on the basis of size.
+
+Initially the segment index size would be used to distribute the segments fairly among the
+executors, because the actual cache size would be known to the driver only when the segments
+are cached and the appropriate information is returned to the driver.
+
+**NOTE:** In case of legacy segments the index size is not available, therefore all the
+legacy segments would be distributed in a round robin fashion.
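+
+A minimal sketch of this assignment logic is shown below. The method and parameter names are
+assumptions, and executorToCacheMapping is flattened to a single map keyed by the unique
+executor identifier for brevity; it does not reflect the actual IndexServer code.
+```
+import scala.collection.mutable
+
+// Sketch: pick an executor for a segment using the two mappings described above.
+def assignExecutor(tableName: String, segmentNo: String, indexSize: Long,
+    tableToExecutorMapping: mutable.Map[String, mutable.Map[String, String]],
+    executorToCacheMapping: mutable.Map[String, Long],
+    availableExecutors: Seq[String]): String = {
+  val tableMapping = tableToExecutorMapping.getOrElseUpdate(tableName, mutable.Map.empty)
+  tableMapping.get(segmentNo) match {
+    case Some(executor) => executor // most probably already cached on this executor
+    case None =>
+      // prefer an executor with no cache assigned yet, else pick the least loaded one
+      val executor = availableExecutors
+        .find(e => !executorToCacheMapping.contains(e))
+        .getOrElse(executorToCacheMapping.minBy(_._2)._1)
+      tableMapping.put(segmentNo, executor)
+      executorToCacheMapping.put(executor,
+        executorToCacheMapping.getOrElse(executor, 0L) + indexSize)
+      executor
+  }
+}
+```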
+
+After the job is completed the tasks would return the cache size held by each executor,
+which would be updated in executorToCacheMapping, and the pruned blocklets, which would be
+further used for result fetching.
+
+## Reallocation of executor
+In case an executor becomes dead/unavailable, the segments that were earlier handled by it
+would be reassigned to some other executor using the distribution logic.
+
+**NOTE:** Cache loading would be done again in the new executor for the current query.
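+
+Continuing the sketch above, the reassignment could look like this; again the names are
+hypothetical and only illustrate the behaviour described in this section.
+```
+// Sketch: forget dead executors and reassign their segments with the same logic.
+def reassign(deadExecutors: Set[String],
+    tableToExecutorMapping: mutable.Map[String, mutable.Map[String, String]],
+    executorToCacheMapping: mutable.Map[String, Long],
+    availableExecutors: Seq[String]): Unit = {
+  deadExecutors.foreach(executorToCacheMapping.remove) // their cache sizes are gone
+  val stale = for {
+    (table, segMap) <- tableToExecutorMapping.toSeq
+    (segment, executor) <- segMap.toSeq if deadExecutors.contains(executor)
+  } yield (table, segment)
+  stale.foreach { case (table, segment) =>
+    tableToExecutorMapping(table).remove(segment) // old location is no longer valid
+    // index size is unknown here, so 0 is passed; the cache is reloaded by the next query
+    assignExecutor(table, segment, 0L, tableToExecutorMapping,
+      executorToCacheMapping, availableExecutors)
+  }
+}
+```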
+
+## MetaCache DDL
+The show/drop metacache DDL have been modified to operate on the executor side cache as
+well. When the user fires SHOW METACACHE, a new column called cache location will indicate
+whether the cache is from the executor or the driver. For DROP METACACHE the user has to
+enable/disable the index server using the dynamic configuration to clear the cache of the
+desired location.
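+
+A possible session, assuming the MetaCache DDL syntax from the DDL documentation and that
+carbon.enable.index.server is dynamically settable as described above:
+```
+// Illustrative usage from a SparkSession `spark` with CarbonData configured.
+spark.sql("SET carbon.enable.index.server = true")  // dynamic configuration
+spark.sql("SHOW METACACHE ON TABLE db1.t1").show()  // cache location: driver or executor
+spark.sql("DROP METACACHE ON TABLE db1.t1")         // clears cache of the enabled location
+```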
+
+## Fallback
+In case of any failure the index server would fall back to embedded mode, which means that
+the JDBCServer would take care of distributed pruning. A similar job would be fired by the
+JDBCServer which would take care of pruning using its own executors. If for any reason the
+embedded mode also fails to prune the datamaps then the job would be passed on to the driver.
+
+**NOTE:** In case of embedded mode a job would be fired to clear the cache, as data cached
+in the JDBCServer executors would be of no use.
+
+## Writing splits to a file
+If the response is too huge then it is better to write the splits to a file so that the
+driver can read this file and create the splits. This can be controlled using the property
+'carbon.index.server.inmemory.serialization.threshold.inKB'. The minimum value that can be
+set for this property is 0, meaning that the splits would be written to file no matter how
+small they are; the maximum is 102400KB, which means the splits for an executor would be
+written to file once their size crosses the configured value.
+
+The user can set the location for these files by using 'carbon.indexserver.temp.path'. By
+default the table path would be used to write the files.
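+
+For illustration, these properties could be set through the CarbonProperties API (they can
+equally be placed in carbon.properties); the threshold and path values below are placeholders.
+```
+import org.apache.carbondata.core.util.CarbonProperties
+
+// Sketch: write splits to file once their serialized size crosses 1024KB.
+CarbonProperties.getInstance()
+  .addProperty("carbon.index.server.inmemory.serialization.threshold.inKB", "1024")
+  .addProperty("carbon.indexserver.temp.path", "hdfs://namenode/tmp/indexserver")
+```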
+
+## Configurations
+
+##### carbon.properties(JDBCServer) 
+
+| Name     |      Default Value    |  Description |
+|:----------:|:-------------:|:------:       |
+| carbon.enable.index.server | false | Enable the use of the index server for pruning for the whole application. |
+| carbon.index.server.ip | NA | Specify the IP/HOST on which the server would be started. Better to specify the private IP. |
+| carbon.index.server.port | NA | The port on which the index server has to be started. |
+| carbon.disable.index.server.fallback | false | Whether to enable/disable fallback for the index server. Should be used for testing purposes only. |
+| carbon.index.server.max.worker.threads | 500 | Number of RPC handlers to open for accepting requests from the JDBC driver. Max accepted value is Integer.MAX_VALUE. |
+| carbon.index.server.max.jobname.length | NA | The max length of the job name to show in the index server application UI. For bigger queries this may impact performance, as the whole string would be sent from JDBCServer to IndexServer. |
+
+
+##### carbon.properties(IndexServer) 
+
+| Name     |      Default Value    |  Description |
+|:----------:|:-------------:|:------:       |
+| carbon.enable.index.server | false | Enable the use of the index server for pruning for the whole application. |
+| carbon.index.server.ip | NA | Specify the IP/HOST on which the server would be started. Better to specify the private IP. |
+| carbon.index.server.port | NA | The port on which the index server has to be started. |
+| carbon.index.server.max.worker.threads | 500 | Number of RPC handlers to open for accepting requests from the JDBC driver. Max accepted value is Integer.MAX_VALUE. |
+| carbon.max.executor.lru.cache.size | NA | Used to specify the max size for the executor LRU cache. Mandatory for the user to set. |
+| carbon.index.server.max.jobname.length | NA | The max length of the job name to show in the index server application UI. For bigger queries this may impact performance, as the whole string would be sent from JDBCServer to IndexServer. |
+| carbon.max.executor.threads.for.block.pruning | 4 | Max executor threads used for block pruning. |
+| carbon.index.server.inmemory.serialization.threshold.inKB | 300 | Max in-memory serialization size; after reaching this threshold the data will be written to file. Min value the user can set is 0KB and max is 102400KB. |
+| carbon.indexserver.temp.path | tablePath | Used to write split serialization data when the in-memory size crosses the threshold. |
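+
+Bringing the tables together, a minimal setup could look like the sketch below; the host,
+port and cache size values are placeholders, and the LRU cache size unit is an assumption.
+```
+import org.apache.carbondata.core.util.CarbonProperties
+
+// Sketch: minimal properties implied by the tables above (placeholder values).
+CarbonProperties.getInstance()
+  .addProperty("carbon.enable.index.server", "true")
+  .addProperty("carbon.index.server.ip", "10.0.0.5")        // private IP of the server
+  .addProperty("carbon.index.server.port", "9998")          // any free port
+  .addProperty("carbon.max.executor.lru.cache.size", "512") // mandatory on the IndexServer side
+```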
+
+
 
 Review comment:
   added
