[CARBONDATA-2915] Reformat Documentation of CarbonData

1. Split our CarbonData command documentation into DDL and DML.
2. Add Presto integration along with Spark into the quick start.
3. Add a master reference manual which lists all the commands supported in
   CarbonData. This manual shall have links to the supported DDL and DML.
4. Add an introduction to CarbonData covering architecture, design and
   supported features.
5. Merge the FAQ and troubleshooting documents into a single document.
6. Add a separate md file to explain to users how to navigate across our
   documentation.
7. Add a TOC (Table of Contents) to all the md files which have multiple
   sections.
8. Add the list of supported properties at the beginning of each DDL or DML so
   that users know all the properties that are supported.
9. Rewrite the configuration property descriptions to explain each property in
   a bit more detail and also highlight when to use it and any caveats.
10. Reorder the configuration properties table to group properties feature-wise.
11. Fix grammar and sentences.

This closes #2693


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6e50c1c6
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/6e50c1c6
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/6e50c1c6

Branch: refs/heads/master
Commit: 6e50c1c6fc1d6e82a4faf6dc6e0824299786ccc0
Parents: 3894e1d
Author: Raghunandan S <carbondatacontributi...@gmail.com>
Authored: Mon Aug 27 19:15:42 2018 +0800
Committer: chenliang613 <chenliang...@huawei.com>
Committed: Fri Sep 7 23:53:19 2018 +0800

----------------------------------------------------------------------
 docs/How-to-contribute-to-Apache-CarbonData.md |  192 ---
 docs/configuration-parameters.md               |    8 +-
 docs/data-management-on-carbondata.md          | 1402 -------------------
 docs/datamap-developer-guide.md                |   13 +-
 docs/datamap/bloomfilter-datamap-guide.md      |    5 +-
 docs/datamap/datamap-management.md             |   17 +-
 docs/datamap/lucene-datamap-guide.md           |    4 +-
 docs/datamap/preaggregate-datamap-guide.md     |    7 +-
 docs/datamap/timeseries-datamap-guide.md       |   38 +-
 docs/ddl-of-carbondata.md                      |  957 +++++++++++++
 docs/dml-of-carbondata.md                      |  469 +++++++
 docs/documentation.md                          |   66 +
 docs/faq.md                                    |  283 +++-
 docs/file-structure-of-carbondata.md           |  178 ++-
 docs/hive-guide.md                             |  100 ++
 docs/how-to-contribute-to-apache-carbondata.md |  192 +++
 docs/images/2-1_1.png                          |  Bin 0 -> 91864 bytes
 docs/images/2-2_1.png                          |  Bin 0 -> 103559 bytes
 docs/images/2-3_1.png                          |  Bin 0 -> 18316 bytes
 docs/images/2-3_2.png                          |  Bin 0 -> 33150 bytes
 docs/images/2-3_3.png                          |  Bin 0 -> 42102 bytes
 docs/images/2-3_4.png                          |  Bin 0 -> 81039 bytes
 docs/images/2-4_1.png                          |  Bin 0 -> 11212 bytes
 docs/images/2-5_1.png                          |  Bin 0 -> 6386 bytes
 docs/images/2-5_2.png                          |  Bin 0 -> 16141 bytes
 docs/images/2-5_3.png                          |  Bin 0 -> 9395 bytes
 docs/images/2-6_1.png                          |  Bin 0 -> 17016 bytes
 docs/images/carbondata-performance.png         |  Bin 0 -> 375287 bytes
 docs/introduction.md                           |  117 ++
 docs/language-manual.md                        |   39 +
 docs/performance-tuning.md                     |  246 ++++
 docs/quick-start-guide.md                      |  378 ++++-
 docs/s3-guide.md                               |    5 +-
 docs/sdk-guide.md                              |   15 +-
 docs/segment-management-on-carbondata.md       |  142 ++
 docs/streaming-guide.md                        |  172 ++-
 docs/supported-data-types-in-carbondata.md     |    5 +-
 docs/troubleshooting.md                        |  267 ----
 docs/usecases.md                               |  215 +++
 docs/useful-tips-on-carbondata.md              |  177 ---
 integration/hive/hive-guide.md                 |  100 --
 41 files changed, 3578 insertions(+), 2231 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/6e50c1c6/docs/How-to-contribute-to-Apache-CarbonData.md
----------------------------------------------------------------------
diff --git a/docs/How-to-contribute-to-Apache-CarbonData.md 
b/docs/How-to-contribute-to-Apache-CarbonData.md
deleted file mode 100644
index 8cda54a..0000000
--- a/docs/How-to-contribute-to-Apache-CarbonData.md
+++ /dev/null
@@ -1,192 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more 
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership. 
-    The ASF licenses this file to you under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with 
-    the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software 
-    distributed under the License is distributed on an "AS IS" BASIS, 
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and 
-    limitations under the License.
--->
-
-# How to contribute to Apache CarbonData
-
-The Apache CarbonData community welcomes all kinds of contributions from 
anyone with a passion for
-faster data format! Apache CarbonData is a new file format for faster 
interactive query using
-advanced columnar storage, index, compression and encoding techniques to 
improve computing
-efficiency, which in turn helps speed up queries by an order of magnitude over 
petabytes of data.
-
-We use a review-then-commit workflow in CarbonData for all contributions.
-
-* Engage -> Design -> Code -> Review -> Commit
-
-## Engage
-
-### Mailing list(s)
-
-We discuss design and implementation issues on d...@carbondata.apache.org Join 
by
-emailing dev-subscr...@carbondata.apache.org
-
-### Apache JIRA
-
-We use [Apache JIRA](https://issues.apache.org/jira/browse/CARBONDATA) as an 
issue tracking and
-project management tool, as well as a way to communicate among a very diverse 
and distributed set
-of contributors. To be able to gather feedback, avoid frustration, and avoid 
duplicated efforts all
-CarbonData-related work should be tracked there.
-
-If you do not already have an Apache JIRA account, sign up 
[here](https://issues.apache.org/jira/).
-
-If a quick search doesn’t turn up an existing JIRA issue for the work you 
want to contribute,
-create it. Please discuss your proposal with a committer or the component lead 
in JIRA or,
-alternatively, on the developer mailing list(d...@carbondata.apache.org).
-
-If there’s an existing JIRA issue for your intended contribution, please 
comment about your
-intended work. Once the work is understood, a committer will assign the issue 
to you.
-(If you don’t have a JIRA role yet, you’ll be added to the 
“contributor” role.) If an issue is
-currently assigned, please check with the current assignee before reassigning.
-
-For moderate or large contributions, you should not start coding or writing a 
design doc unless
-there is a corresponding JIRA issue assigned to you for that work. Simple 
changes,
-like fixing typos, do not require an associated issue.
-
-### Design
-
-To clearly express your thoughts and get early feedback from other community 
members, we encourage you to clearly scope, document the design of non-trivial 
contributions and discuss with the CarbonData community before you start coding.
-
-Generally, the JIRA issue is the best place to gather relevant design docs, 
comments, or references. It’s great to explicitly include relevant 
stakeholders early in the conversation. For designs that may be generally 
interesting, we also encourage conversations on the developer’s mailing list.
-
-### Code
-
-We use GitHub’s pull request functionality to review proposed code changes.
-If you do not already have a personal GitHub account, sign up 
[here](https://github.com).
-
-### Git config
-
-Ensure to finish the below config(user.email, user.name) before starting PR 
works.
-```
-$ git config --global user.email "y...@example.com"
-$ git config --global user.name "Your Name"
-```
-
-#### Fork the repository on GitHub
-
-Go to the [Apache CarbonData GitHub 
mirror](https://github.com/apache/carbondata) and
-fork the repository to your account.
-This will be your private workspace for staging changes.
-
-#### Clone the repository locally
-
-You are now ready to create the development environment on your local machine.
-Clone CarbonData’s read-only GitHub mirror.
-```
-$ git clone https://github.com/apache/carbondata.git
-$ cd carbondata
-```
-Add your forked repository as an additional Git remote, where you’ll push 
your changes.
-```
-$ git remote add <GitHub_user> https://github.com/<GitHub_user>/carbondata.git
-```
-You are now ready to start developing!
-
-#### Create a branch in your fork
-
-You’ll work on your contribution in a branch in your own (forked) 
repository. Create a local branch,
-initialized with the state of the branch you expect your changes to be merged 
into.
-Keep in mind that we use several branches, including master, feature-specific, 
and
-release-specific branches. If you are unsure, initialize with the state of the 
master branch.
-```
-$ git fetch --all
-$ git checkout -b <my-branch> origin/master
-```
-At this point, you can start making and committing changes to this branch in a 
standard way.
-
-#### Syncing and pushing your branch
-
-Periodically while you work, and certainly before submitting a pull request, 
you should update
-your branch with the most recent changes to the target branch.
-```
-$ git pull --rebase
-```
-Remember to always use --rebase parameter to avoid extraneous merge commits.
-
-To push your local, committed changes to your (forked) repository on GitHub, 
run:
-```
-$ git push <GitHub_user> <my-branch>
-```
-#### Testing
-
-All code should have appropriate unit testing coverage. New code should have 
new tests in the
-same contribution. Bug fixes should include a regression test to prevent the 
issue from reoccurring.
-
-For contributions to the Java code, run unit tests locally via Maven.
-```
-$ mvn clean verify
-```
-
-### Review
-
-Once the initial code is complete and the tests pass, it’s time to start the 
code review process.
-We review and discuss all code, no matter who authors it. It’s a great way 
to build community,
-since you can learn from other developers, and they become familiar with your 
contribution.
-It also builds a strong project by encouraging a high quality bar and keeping 
code consistent
-throughout the project.
-
-#### Create a pull request
-
-Organize your commits to make your reviewer’s job easier. Use the following 
command to
-re-order, squash, edit, or change description of individual commits.
-```
-$ git rebase -i origin/master
-```
-Navigate to the CarbonData GitHub mirror to create a pull request. The title 
of the pull request
-should be strictly in the following format:
-```
-[CARBONDATA-JiraTicketNumber][FeatureName] Description of pull request
-```
-Please include a descriptive pull request message to help make the 
reviewer’s job easier:
-```
- - The root cause/problem statement
- - What is the implemented solution
- ```
-
-If you know a good committer to review your pull request, please make a 
comment like the following.
-If not, don’t worry, a committer will pick it up.
-```
-Hi @<committer/reviewer name>, can you please take a look?
-```
-
-#### Code Review and Revision
-
-During the code review process, don’t rebase your branch or otherwise modify 
published commits,
-since this can remove existing comment history and be confusing to the 
reviewer,
-When you make a revision, always push it in a new commit.
-
-Our GitHub mirror automatically provides pre-commit testing coverage using 
Jenkins.
-Please make sure those tests pass; the contribution cannot be merged otherwise.
-
-#### LGTM
-Once the reviewer is happy with the change, they’ll respond with an LGTM 
(“looks good to me!”).
-At this point, the committer will take over, possibly make some additional 
touch ups,
-and merge your changes into the codebase.
-
-In the case both the author and the reviewer are committers, either can merge 
the pull request.
-Just be sure to communicate clearly whose responsibility it is in this 
particular case.
-
-Thank you for your contribution to Apache CarbonData!
-
-#### Deleting your branch(optional)
-Once the pull request is merged into the Apache CarbonData repository, you can 
safely delete the
-branch locally and purge it from your forked repository.
-
-From another local branch, run:
-```
-$ git fetch --all
-$ git branch -d <my-branch>
-$ git push <GitHub_user> --delete <my-branch>
-```

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6e50c1c6/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 9eca358..c8c74f2 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -16,7 +16,7 @@
 -->
 
 # Configuring CarbonData
- This guide explains the configurations that can be used to tune CarbonData to 
achieve better performance.Some of the properties can be set dynamically and 
are explained in the section Dynamic Configuration In CarbonData Using 
SET-RESET.Most of the properties that control the internal settings have 
reasonable default values.They are listed along with the properties along with 
explanation.
+ This guide explains the configurations that can be used to tune CarbonData to 
achieve better performance. Most of the properties that control the internal 
settings have reasonable default values. They are listed below along with an 
explanation of each property.
 
  * [System Configuration](#system-configuration)
  * [Data Loading Configuration](#data-loading-configuration)
@@ -59,7 +59,7 @@ This section provides the details of all the configurations 
required for the Car
 | carbon.bad.records.action | FAIL | CarbonData in addition to identifying the 
bad records, can take certain actions on such data.This configuration can have 
four types of actions for bad records namely FORCE, REDIRECT, IGNORE and FAIL. 
If set to FORCE then it auto-corrects the data by storing the bad records as 
NULL. If set to REDIRECT then bad records are written to the raw CSV instead of 
being loaded. If set to IGNORE then bad records are neither loaded nor written 
to the raw CSV. If set to FAIL then data loading fails if any bad records are 
found. |
 | carbon.options.is.empty.data.bad.record | false | Based on the business 
scenarios, empty("" or '' or ,,) data can be valid or invalid. This 
configuration controls how empty data should be treated by CarbonData. If 
false, then empty ("" or '' or ,,) data will not be considered as bad record 
and vice versa. |
 | carbon.options.bad.record.path | (none) | Specifies the HDFS path where bad 
records are to be stored. By default the value is Null. This path must to be 
configured by the user if ***carbon.options.bad.records.logger.enable*** is 
**true** or ***carbon.bad.records.action*** is **REDIRECT**. |
-| carbon.blockletgroup.size.in.mb | 64 | Please refer to 
[file-structure-of-carbondata](../file-structure-of-carbondata.md ) to 
understand the storage format of CarbonData.The data are read as a group of 
blocklets which are called blocklet groups. This parameter specifies the size 
of each blocklet group. Higher value results in better sequential IO access.The 
minimum value is 16MB, any value lesser than 16MB will reset to the default 
value (64MB).**NOTE:** Configuring a higher value might lead to poor 
performance as an entire blocklet group will have to read into memory before 
processing.For filter queries with limit, it is **not advisable** to have a 
bigger blocklet size.For Aggregation queries which need to return more number 
of rows,bigger blocklet size is advisable. |
+| carbon.blockletgroup.size.in.mb | 64 | Please refer to 
[file-structure-of-carbondata](./file-structure-of-carbondata.md#carbondata-file-format)
 to understand the storage format of CarbonData.The data are read as a group of 
blocklets which are called blocklet groups. This parameter specifies the size 
of each blocklet group. Higher value results in better sequential IO access.The 
minimum value is 16MB, any value lesser than 16MB will reset to the default 
value (64MB).**NOTE:** Configuring a higher value might lead to poor 
performance as an entire blocklet group will have to read into memory before 
processing.For filter queries with limit, it is **not advisable** to have a 
bigger blocklet size.For Aggregation queries which need to return more number 
of rows,bigger blocklet size is advisable. |
 | carbon.sort.file.write.buffer.size | 16384 | CarbonData sorts and writes 
data to intermediate files to limit the memory usage.This configuration 
determines the buffer size to be used for reading and writing such files. 
**NOTE:** This configuration is useful to tune IO and derive optimal 
performance.Based on the OS and underlying harddisk type, these values can 
significantly affect the overall performance.It is ideal to tune the buffersize 
equivalent to the IO buffer size of the OS.Recommended range is between 10240 
to 10485760 bytes. |
 | carbon.sort.intermediate.files.limit | 20 | CarbonData sorts and writes data 
to intermediate files to limit the memory usage.Before writing the target 
carbondata file, the data in these intermediate files needs to be sorted again 
so as to ensure the entire data in the data load is sorted.This configuration 
determines the minimum number of intermediate files after which merged sort is 
applied on them sort the data.**NOTE:** Intermediate merging happens on a 
separate thread in the background.Number of threads used is determined by 
***carbon.merge.sort.reader.thread***.Configuring a low value will cause more 
time to be spent in merging these intermediate merged files which can cause 
more IO.Configuring a high value would cause not to use the idle threads to do 
intermediate sort merges.Range of recommended values are between 2 and 50 |
 | carbon.csv.read.buffersize.byte | 1048576 | CarbonData uses Hadoop 
InputFormat to read the csv files.This configuration value is used to pass 
buffer size as input for the Hadoop MR job when reading the csv files.This 
value is configured in bytes.**NOTE:** Refer to 
***org.apache.hadoop.mapreduce.InputFormat*** documentation for additional 
information. |
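
For illustration, the bad records handling properties described above can be wired 
together as in the following minimal Scala sketch. It assumes a SparkSession named 
`carbon` created with CarbonData support as in the quick start guide, the 
`CarbonProperties` helper from the CarbonData core module, and a hypothetical 
`sales` table and CSV path; the property names and values are the ones documented 
in the rows above.

```
import org.apache.carbondata.core.util.CarbonProperties

// Minimal sketch (assumed setup): redirect bad records to a dedicated path
// instead of failing the load. Property names/values are from the table above;
// the table name and paths are hypothetical.
val props = CarbonProperties.getInstance()
props.addProperty("carbon.options.bad.records.logger.enable", "true")
props.addProperty("carbon.bad.records.action", "REDIRECT")
props.addProperty("carbon.options.bad.record.path", "hdfs://hacluster/user/carbon/badrecords")

// Load as usual; rows that cannot be parsed are written to the bad record path.
carbon.sql(
  "LOAD DATA INPATH 'hdfs://hacluster/user/carbon/sales.csv' INTO TABLE sales " +
  "OPTIONS('DELIMITER'=',', 'HEADER'='true')")
```
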
@@ -70,7 +70,7 @@ This section provides the details of all the configurations 
required for the Car
 | carbon.enable.calculate.size | true | **For Load Operation**: Setting this 
property calculates the size of the carbon data file (.carbondata) and carbon 
index file (.carbonindex) for every load and updates the table status file. 
**For Describe Formatted**: Setting this property calculates the total size of 
the carbon data files and carbon index files for the respective table and 
displays in describe formatted command.**NOTE:** This is useful to determine 
the overall size of the carbondata table and also get an idea of how the table 
is growing in order to take up other backup strategy decisions. |
 | carbon.cutOffTimestamp | (none) | CarbonData has capability to generate the 
Dictionary values for the timestamp columns from the data itself without the 
need to store the computed dictionary values. This configuration sets the start 
date for calculating the timestamp. Java counts the number of milliseconds from 
start of "1970-01-01 00:00:00". This property is used to customize the start of 
position. For example "2000-01-01 00:00:00". **NOTE:** The date must be in the 
form ***carbon.timestamp.format***. CarbonData supports storing data for upto 
68 years.For example, if the cut-off time is 1970-01-01 05:30:00, then data 
upto 2038-01-01 05:30:00 will be supported by CarbonData. |
 | carbon.timegranularity | SECOND | The configuration is used to specify the 
data granularity level such as DAY, HOUR, MINUTE, or SECOND.This helps to store 
more than 68 years of data into CarbonData. |
-| carbon.use.local.dir | false | CarbonData during data loading, writes files 
to local temp directories before copying the files to HDFS.This configuration 
is used to specify whether CarbonData can write locally to tmp directory of the 
container or to the YARN application directory. |
+| carbon.use.local.dir | false | CarbonData, during data loading, writes files 
to local temp directories before copying the files to HDFS.This configuration 
is used to specify whether CarbonData can write locally to tmp directory of the 
container or to the YARN application directory. |
 | carbon.use.multiple.temp.dir | false | When multiple disks are present in 
the system, YARN is generally configured with multiple disks to be used as temp 
directories for managing the containers.This configuration specifies whether to 
use multiple YARN local directories during data loading for disk IO load 
balancing.Enable ***carbon.use.local.dir*** for this configuration to take 
effect.**NOTE:** Data Loading is an IO intensive operation whose performance 
can be limited by the disk IO threshold, particularly during multi table 
concurrent data load.Configuring this parameter, balances the disk IO across 
multiple disks there by improving the over all load performance. |
 | carbon.sort.temp.compressor | (none) | CarbonData writes every 
***carbon.sort.size*** number of records to intermediate temp files during data 
loading to ensure memory footprint is within limits. These temporary files can 
be compressed and written in order to save the storage space.This configuration 
specifies the name of compressor to be used to compress the intermediate sort 
temp files during sort procedure in data loading.The valid values are 
'SNAPPY','GZIP','BZIP2','LZ4','ZSTD' and empty. By default, empty means that 
Carbondata will not compress the sort temp files.**NOTE:** Compressor will be 
useful if you encounter disk bottleneck.Since the data needs to be compressed 
and decompressed,it involves additional CPU cycles,but is compensated by the 
high IO throughput due to less data to be written or read from the disks. |
 | carbon.load.skewedDataOptimization.enabled | false | During data 
loading,CarbonData would divide the number of blocks equally so as to ensure 
all executors process same number of blocks.This mechanism satisfies most of 
the scenarios and ensures maximum parallel processing for optimal data loading 
performance.In some business scenarios, there might be scenarios where the size 
of blocks vary significantly and hence some executors would have to do more 
work if they get blocks containing more data. This configuration enables size 
based block allocation strategy for data loading.When loading, carbondata will 
use file size based block allocation strategy for task distribution. It will 
make sure that all the executors process the same size of data.**NOTE:** This 
configuration is useful if the size of your input data files varies widely, say 
1MB~1GB.For this configuration to work effectively,knowing the data pattern and 
size is important and necessary. |
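
A similarly hedged sketch of how the data loading properties in this block might 
be set programmatically, again assuming the `CarbonProperties` helper; the property 
names and the valid values (for example SNAPPY for the sort temp compressor) are 
quoted from the descriptions above.

```
import org.apache.carbondata.core.util.CarbonProperties

// Illustrative only: compress intermediate sort temp files and enable
// size-based block allocation for inputs whose file sizes vary widely.
val props = CarbonProperties.getInstance()
props.addProperty("carbon.sort.temp.compressor", "SNAPPY")
props.addProperty("carbon.load.skewedDataOptimization.enabled", "true")
props.addProperty("carbon.use.local.dir", "true")
props.addProperty("carbon.use.multiple.temp.dir", "true")
```
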
@@ -107,7 +107,7 @@ This section provides the details of all the configurations 
required for the Car
 | carbon.numberof.preserve.segments | 0 | If the user wants to preserve some 
number of segments from being compacted then he can set this configuration. 
Example: carbon.numberof.preserve.segments = 2 then 2 latest segments will 
always be excluded from the compaction. No segments will be preserved by 
default.**NOTE:** This configuration is useful when the chances of input data 
can be wrong due to environment scenarios.Preserving some of the latest 
segments from being compacted can help to easily delete the wrongly loaded 
segments.Once compacted,it becomes more difficult to determine the exact data 
to be deleted(except when data is incrementing according to time) |
 | carbon.allowed.compaction.days | 0 | This configuration is used to control 
on the number of recent segments that needs to be compacted, ignoring the older 
ones. This configuration is in days. For example, if the configuration is 2, then 
the segments which are loaded in the time frame of past 2 days only will get 
merged. Segments which are loaded earlier than 2 days will not be merged. This 
configuration is disabled by default.**NOTE:** This configuration is useful 
when a bulk of history data is loaded into the carbondata.Query on this data is 
less frequent. In such cases, involving these segments in compaction will also 
increase resource consumption and the overall compaction time. |
 | carbon.enable.auto.load.merge | false | Compaction can be automatically 
triggered once data load completes.This ensures that the segments are merged in 
time and thus query time does not increase with the increase in segments. This 
configuration enables to do compaction along with data loading.**NOTE: 
**Compaction will be triggered once the data load completes. But the data load 
status waits till the compaction is completed. Hence it might look like data 
loading time has increased, but that's not the case. Moreover, failure of 
compaction will not affect the data loading status.If data load had completed 
successfully, the status would be updated and segments are committed.However, 
failure while data loading, will not trigger compaction and error is returned 
immediately. |
-| carbon.enable.page.level.reader.in.compaction|true|Enabling page level 
reader for compaction reduces the memory usage while compacting more number of 
segments. It allows reading only page by page instead of reading whole blocklet 
to memory.**NOTE:** Please refer to 
[file-structure-of-carbondata](../file-structure-of-carbondata.md ) to 
understand the storage format of CarbonData and concepts of pages.|
+| carbon.enable.page.level.reader.in.compaction|true|Enabling page level 
reader for compaction reduces the memory usage while compacting more number of 
segments. It allows reading only page by page instead of reading whole blocklet 
to memory.**NOTE:** Please refer to 
[file-structure-of-carbondata](./file-structure-of-carbondata.md#carbondata-file-format)
 to understand the storage format of CarbonData and concepts of pages.|
 | carbon.concurrent.compaction | true | Compaction of different tables can be 
executed concurrently.This configuration determines whether to compact all 
qualifying tables in parallel or not.**NOTE: **Compacting concurrently is a 
resource demanding operation and needs more resources, thereby affecting the 
query performance also.This configuration is **deprecated** and might be 
removed in future releases. |
 | carbon.compaction.prefetch.enable | false | Compaction operation is similar 
to Query + data load where in data from qualifying segments are queried and 
data loading performed to generate a new single segment.This configuration 
determines whether to query ahead data from segments and feed it for data 
loading.**NOTE: **This configuration is disabled by default as it needs extra 
resources for querying ahead extra data.Based on the memory availability on the 
cluster, user can enable it to improve compaction performance. |
 | carbon.merge.index.in.segment | true | Each CarbonData file has a companion 
CarbonIndex file which maintains the metadata about the data.These CarbonIndex 
files are read and loaded into driver and is used subsequently for pruning of 
data during queries.These CarbonIndex files are very small in size(few KB) and 
are many.Reading many small files from HDFS is not efficient and leads to slow 
IO performance.Hence these CarbonIndex files belonging to a segment can be 
combined into  a single file and read once there by increasing the IO 
throughput.This configuration enables to merge all the CarbonIndex files into a 
single MergeIndex file upon data loading completion.**NOTE:** Reading a single 
big file is more efficient in HDFS and IO throughput is very high.Due to this 
the time needed to load the index files into memory when query is received for 
the first time on that table is significantly reduced and there by 
significantly reduces the delay in serving the first query. |
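
To tie the compaction-related properties together, the following sketch enables 
automatic load merge and merge index generation and then triggers a manual minor 
compaction. It assumes the same `carbon` session and `CarbonProperties` helper as 
above, and `sales` is a hypothetical table name; the property names and defaults 
are the ones listed in the table.

```
import org.apache.carbondata.core.util.CarbonProperties

// Illustrative only: merge segments automatically after loads, keep the two
// latest segments out of compaction, and merge the per-segment index files.
val props = CarbonProperties.getInstance()
props.addProperty("carbon.enable.auto.load.merge", "true")
props.addProperty("carbon.numberof.preserve.segments", "2")
props.addProperty("carbon.merge.index.in.segment", "true")

// A compaction can also be requested explicitly once loads have completed.
carbon.sql("ALTER TABLE sales COMPACT 'MINOR'")
```
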

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6e50c1c6/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
deleted file mode 100644
index 2cde334..0000000
--- a/docs/data-management-on-carbondata.md
+++ /dev/null
@@ -1,1402 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more 
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership. 
-    The ASF licenses this file to you under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with 
-    the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software 
-    distributed under the License is distributed on an "AS IS" BASIS, 
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and 
-    limitations under the License.
--->
-
-# Data Management on CarbonData
-
-This tutorial is going to introduce all commands and data operations on 
CarbonData.
-
-* [CREATE TABLE](#create-table)
-* [CREATE DATABASE](#create-database)
-* [TABLE MANAGEMENT](#table-management)
-* [LOAD DATA](#load-data)
-* [UPDATE AND DELETE](#update-and-delete)
-* [COMPACTION](#compaction)
-* [PARTITION](#partition)
-* [BUCKETING](#bucketing)
-* [SEGMENT MANAGEMENT](#segment-management)
-
-## CREATE TABLE
-
-  This command can be used to create a CarbonData table by specifying the list 
of fields along with the table properties. You can also specify the location 
where the table needs to be stored.
-  
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type , ...)]
-  STORED AS carbondata
-  [TBLPROPERTIES (property_name=property_value, ...)]
-  [LOCATION 'path']
-  ```
-  **NOTE:** CarbonData also supports "STORED AS carbondata" and "USING 
carbondata". Find example code at 
[CarbonSessionExample](https://github.com/apache/carbondata/blob/master/examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonSessionExample.scala)
 in the CarbonData repo.
-### Usage Guidelines
-
-  Following are the guidelines for TBLPROPERTIES, CarbonData's additional 
table options can be set via carbon.properties.
-  
-   - **Dictionary Encoding Configuration**
-
-     Dictionary encoding is turned off for all columns by default from 1.3 
onwards, you can use this command for including or excluding columns to do 
dictionary encoding.
-     Suggested use cases : do dictionary encoding for low cardinality columns, 
it might help to improve data compression ratio and performance.
-
-     ```
-     TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
-        ```
-        NOTE: Dictionary Include/Exclude for complex child columns is not 
supported.
-        
-   - **Inverted Index Configuration**
-
-     By default inverted index is enabled, it might help to improve 
compression ratio and query speed, especially for low cardinality columns which 
are in reward position.
-     Suggested use cases : For high cardinality columns, you can disable the 
inverted index for improving the data loading performance.
-
-     ```
-     TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
-     ```
-
-   - **Sort Columns Configuration**
-
-     This property is for users to specify which columns belong to the 
MDK(Multi-Dimensions-Key) index.
-     * If users don't specify "SORT_COLUMN" property, by default MDK index be 
built by using all dimension columns except complex data type column. 
-     * If this property is specified but with empty argument, then the table 
will be loaded without sort.
-        * This supports only string, date, timestamp, short, int, long, and 
boolean data types.
-     Suggested use cases : Only build MDK index for required columns,it might 
help to improve the data loading performance.
-
-     ```
-     TBLPROPERTIES ('SORT_COLUMNS'='column1, column3')
-     OR
-     TBLPROPERTIES ('SORT_COLUMNS'='')
-     ```
-     NOTE: Sort_Columns for Complex datatype columns is not supported.
-
-   - **Sort Scope Configuration**
-   
-     This property is for users to specify the scope of the sort during data 
load, following are the types of sort scope.
-     
-     * LOCAL_SORT: It is the default sort scope.             
-     * NO_SORT: It will load the data in unsorted manner, it will 
significantly increase load performance.       
-     * BATCH_SORT: It increases the load performance but decreases the query 
performance if identified blocks > parallelism.
-     * GLOBAL_SORT: It increases the query performance, especially high 
concurrent point query.
-       And if you care about loading resources isolation strictly, because the 
system uses the spark GroupBy to sort data, the resource can be controlled by 
spark. 
-        
-       ### Example:
-
-   ```
-    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                   productNumber INT,
-                                   productName STRING,
-                                   storeCity STRING,
-                                   storeProvince STRING,
-                                   productCategory STRING,
-                                   productBatch STRING,
-                                   saleQuantity INT,
-                                   revenue INT)
-    STORED BY 'carbondata'
-    TBLPROPERTIES ('SORT_COLUMNS'='productName,storeCity',
-                   'SORT_SCOPE'='NO_SORT')
-   ```
-   
-   **NOTE:** CarbonData also supports "using carbondata". Find example code at 
[SparkSessionExample](https://github.com/apache/carbondata/blob/master/examples/spark2/src/main/scala/org/apache/carbondata/examples/SparkSessionExample.scala)
 in the CarbonData repo.
- 
-   - **Table Block Size Configuration**
-
-     This property is for setting block size of this table, the default value 
is 1024 MB and supports a range of 1 MB to 2048 MB.
-
-     ```
-     TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
-     ```
-     **NOTE:** 512 or 512M both are accepted.
-
-   - **Table Blocklet Size Configuration**
-
-     This property is for setting blocklet size of this table, the default 
value is 64 MB.
-
-     ```
-     TBLPROPERTIES ('TABLE_BLOCKLET_SIZE'='32')
-     ```
-
-   - **Table Compaction Configuration**
-   
-     These properties are table level compaction configurations, if not 
specified, system level configurations in carbon.properties will be used.
-     Following are 5 configurations:
-     
-     * MAJOR_COMPACTION_SIZE: same meaning as carbon.major.compaction.size, 
size in MB.
-     * AUTO_LOAD_MERGE: same meaning as carbon.enable.auto.load.merge.
-     * COMPACTION_LEVEL_THRESHOLD: same meaning as 
carbon.compaction.level.threshold.
-     * COMPACTION_PRESERVE_SEGMENTS: same meaning as 
carbon.numberof.preserve.segments.
-     * ALLOWED_COMPACTION_DAYS: same meaning as 
carbon.allowed.compaction.days.     
-
-     ```
-     TBLPROPERTIES ('MAJOR_COMPACTION_SIZE'='2048',
-                    'AUTO_LOAD_MERGE'='true',
-                    'COMPACTION_LEVEL_THRESHOLD'='5,6',
-                    'COMPACTION_PRESERVE_SEGMENTS'='10',
-                    'ALLOWED_COMPACTION_DAYS'='5')
-     ```
-     
-   - **Streaming**
-
-     CarbonData supports streaming ingestion for real-time data. You can 
create the ‘streaming’ table using the following table properties.
-
-     ```
-     TBLPROPERTIES ('streaming'='true')
-     ```
-
-   - **Local Dictionary Configuration**
-   
-   Columns for which dictionary is not generated needs more storage space and 
in turn more IO. Also since more data will have to be read during query, query 
performance also would suffer.Generating dictionary per blocklet for such 
columns would help in saving storage space and assist in improving query 
performance as carbondata is optimized for handling dictionary encoded columns 
more effectively.Generating dictionary internally per blocklet is termed as 
local dictionary. Please refer to [File structure of 
Carbondata](../file-structure-of-carbondata.md) for understanding about the 
file structure of carbondata and meaning of terms like blocklet.
-   
-   Local Dictionary helps in:
-   1. Getting more compression.
-   2. Filter queries and full scan queries will be faster as filter will be 
done on encoded data.
-   3. Reducing the store size and memory footprint as only unique values will 
be stored as part of local dictionary and corresponding data will be stored as 
encoded data.
-   4. Getting higher IO throughput.
- 
-   **NOTE:** 
-   
-   * Following Data Types are Supported for Local Dictionary:
-      * STRING
-      * VARCHAR
-      * CHAR
-
-   * Following Data Types are not Supported for Local Dictionary: 
-      * SMALLINT
-      * INTEGER
-      * BIGINT
-      * DOUBLE
-      * DECIMAL
-      * TIMESTAMP
-      * DATE
-      * BOOLEAN
-   
-   * In case of multi-level complex dataType columns, primitive 
string/varchar/char columns are considered for local dictionary generation.
-   
-   Local dictionary will have to be enabled explicitly during create table or 
by enabling the system property 'carbon.local.dictionary.enable'. By default, 
Local Dictionary will be disabled for the carbondata table.
-    
-   Local Dictionary can be configured using the following properties during 
create table command: 
-          
-   | Properties | Default value | Description |
-   | ---------- | ------------- | ----------- |
-   | LOCAL_DICTIONARY_ENABLE | false | Whether to enable local dictionary 
generation. **NOTE:** If this property is defined, it will override the value 
configured at system level by 'carbon.local.dictionary.enable' |
-   | LOCAL_DICTIONARY_THRESHOLD | 10000 | The maximum cardinality of a column 
upto which carbondata can try to generate local dictionary (maximum - 100000) |
-   | LOCAL_DICTIONARY_INCLUDE | string/varchar/char columns| Columns for which 
Local Dictionary has to be generated.**NOTE:** Those string/varchar/char 
columns which are added into DICTIONARY_INCLUDE option will not be considered 
for local dictionary generation.|
-   | LOCAL_DICTIONARY_EXCLUDE | none | Columns for which Local Dictionary need 
not be generated. |
-        
-   **Fallback behavior:** 
-   
-   * When the cardinality of a column exceeds the threshold, it triggers a 
fallback and the generated dictionary will be reverted and data loading will be 
continued without dictionary encoding.
-   
-   **NOTE:** When fallback is triggered, the data loading performance will 
decrease as encoded data will be discarded and the actual data is written to 
the temporary sort files.
-   
-   **Points to be noted:**
-      
-   1. Reduce Block size:
-   
-      Number of Blocks generated is less in case of Local Dictionary as 
compression ratio is high. This may reduce the number of tasks launched during 
query, resulting in degradation of query performance if the pruned blocks are 
less compared to the number of parallel tasks which can be run. So it is 
recommended to configure smaller block size which in turn generates more number 
of blocks.
-            
-   2. All the page-level data for a blocklet needs to be maintained in memory 
until all the pages encoded for local dictionary is processed in order to 
handle fallback. Hence the memory required for local dictionary based table is 
more and this memory increase is proportional to number of columns. 
-       
-### Example:
- 
-   ```
-   CREATE TABLE carbontable(
-               column1 string,
-               column2 string,
-               column3 LONG)
-     STORED BY 'carbondata'
-     
TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000',
-     'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')
-   ```
-
-   **NOTE:** 
-   
-   * We recommend to use Local Dictionary when cardinality is high but is 
distributed across multiple loads
-   * On a large cluster, decoding data can become a bottleneck for global 
dictionary as there will be many remote reads. In this scenario, it is better 
to use Local Dictionary.
-   * When cardinality is less, but loads are repetitive, it is better to use 
global dictionary as local dictionary generates multiple dictionary files at 
blocklet level increasing redundancy.
-
-   - **Caching Min/Max Value for Required Columns**
-     By default, CarbonData caches min and max values of all the columns in 
schema.  As the load increases, the memory required to hold the min and max 
values increases considerably. This feature enables you to configure min and 
max values only for the required columns, resulting in optimized memory usage. 
-        
-        Following are the valid values for COLUMN_META_CACHE:
-        * If you want no column min/max values to be cached in the driver.
-        
-        ```
-        COLUMN_META_CACHE=’’
-        ```
-        
-        * If you want only col1 min/max values to be cached in the driver.
-        
-        ```
-        COLUMN_META_CACHE=’col1’
-        ```
-        
-        * If you want min/max values to be cached in driver for all the 
specified columns.
-        
-        ```
-        COLUMN_META_CACHE=’col1,col2,col3,…’
-        ```
-        
-        Columns to be cached can be specified either while creating table or 
after creation of the table.
-        During create table operation; specify the columns to be cached in 
table properties.
-        
-        Syntax:
-        
-        ```
-        CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 
int,…) STORED BY ‘carbondata’ TBLPROPERTIES 
(‘COLUMN_META_CACHE’=’col1,col2,…’)
-        ```
-        
-        Example:
-        
-        ```
-        CREATE TABLE employee (name String, city String, id int) STORED BY 
‘carbondata’ TBLPROPERTIES (‘COLUMN_META_CACHE’=’name’)
-        ```
-        
-        After creation of table or on already created tables use the alter 
table command to configure the columns to be cached.
-        
-        Syntax:
-        
-        ```
-        ALTER TABLE [dbName].tableName SET TBLPROPERTIES 
(‘COLUMN_META_CACHE’=’col1,col2,…’)
-        ```
-        
-        Example:
-        
-        ```
-        ALTER TABLE employee SET TBLPROPERTIES 
(‘COLUMN_META_CACHE’=’city’)
-        ```
-        
-   - **Caching at Block or Blocklet Level**
-
-     This feature allows you to maintain the cache at Block level, resulting 
in optimized usage of the memory. The memory consumption is high if the 
Blocklet level caching is maintained as a Block can have multiple Blocklet.
-        
-        Following are the valid values for CACHE_LEVEL:
-
-        *Configuration for caching in driver at Block level (default value).*
-        
-        ```
-        CACHE_LEVEL= ‘BLOCK’
-        ```
-        
-        *Configuration for caching in driver at Blocklet level.*
-        
-        ```
-        CACHE_LEVEL= ‘BLOCKLET’
-        ```
-        
-        Cache level can be specified either while creating table or after 
creation of the table.
-        During create table operation specify the cache level in table 
properties.
-        
-        Syntax:
-        
-        ```
-        CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 
int,…) STORED BY ‘carbondata’ TBLPROPERTIES 
(‘CACHE_LEVEL’=’Blocklet’)
-        ```
-        
-        Example:
-        
-        ```
-        CREATE TABLE employee (name String, city String, id int) STORED BY 
‘carbondata’ TBLPROPERTIES (‘CACHE_LEVEL’=’Blocklet’)
-        ```
-        
-        After creation of table or on already created tables use the alter 
table command to configure the cache level.
-        
-        Syntax:
-        
-        ```
-        ALTER TABLE [dbName].tableName SET TBLPROPERTIES 
(‘CACHE_LEVEL’=’Blocklet’)
-        ```
-        
-        Example:
-        
-        ```
-        ALTER TABLE employee SET TBLPROPERTIES 
(‘CACHE_LEVEL’=’Blocklet’)
-        ```
-
-    - **Support Flat folder same as Hive/Parquet**
-
-         This feature allows all carbondata and index files to be kept directly 
under tablepath. Currently all carbondata/carbonindex files written under 
tablepath/Fact/Part0/Segment_NUM folder and it is not same as hive/parquet 
folder structure. This feature makes all files written will be directly under 
tablepath, it does not maintain any segment folder structure.This is useful for 
interoperability between the execution engines and plugin with other execution 
engines like hive or presto becomes easier.
-
-         Following table property enables this feature and default value is 
false.
-         ```
-          'flat_folder'='true'
-         ```
-         Example:
-         ```
-         CREATE TABLE employee (name String, city String, id int) STORED BY 
‘carbondata’ TBLPROPERTIES ('flat_folder'='true')
-         ```
-
-    - **String longer than 32000 characters**
-
-     In common scenarios, the length of string is less than 32000,
-     so carbondata stores the length of content using Short to reduce memory 
and space consumption.
-     To support string longer than 32000 characters, carbondata introduces a 
table property called `LONG_STRING_COLUMNS`.
-     For these columns, carbondata internally stores the length of content 
using Integer.
-
-     You can specify the columns as 'long string column' using below 
tblProperties:
-
-     ```
-     // specify col1, col2 as long string columns
-     TBLPROPERTIES ('LONG_STRING_COLUMNS'='col1,col2')
-     ```
-
-     Besides, you can also use this property through DataFrame by
-     ```
-     df.write.format("carbondata")
-       .option("tableName", "carbonTable")
-       .option("long_string_columns", "col1, col2")
-       .save()
-     ```
-
-     If you are using Carbon-SDK, you can specify the datatype of long string 
column as `varchar`.
-     You can refer to SDKwriterTestCase for example.
-
-     **NOTE:** The LONG_STRING_COLUMNS can only be string/char/varchar columns 
and cannot be dictionary_include/sort_columns/complex columns.
-
-## CREATE TABLE AS SELECT
-  This function allows user to create a Carbon table from any of the 
Parquet/Hive/Carbon table. This is beneficial when the user wants to create 
Carbon table from any other Parquet/Hive table and use the Carbon query engine 
to query and achieve better query results for cases where Carbon is faster than 
other file formats. Also this feature can be used for backing up the data.
-
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
-  STORED BY 'carbondata' 
-  [TBLPROPERTIES (key1=val1, key2=val2, ...)] 
-  AS select_statement;
-  ```
-
-### Examples
-  ```
-  carbon.sql("CREATE TABLE source_table(
-                             id INT,
-                             name STRING,
-                             city STRING,
-                             age INT)
-              STORED AS parquet")
-  carbon.sql("INSERT INTO source_table SELECT 1,'bob','shenzhen',27")
-  carbon.sql("INSERT INTO source_table SELECT 2,'david','shenzhen',31")
-  
-  carbon.sql("CREATE TABLE target_table
-              STORED BY 'carbondata'
-              AS SELECT city,avg(age) FROM source_table GROUP BY city")
-              
-  carbon.sql("SELECT * FROM target_table").show
-    // results:
-    //    +--------+--------+
-    //    |    city|avg(age)|
-    //    +--------+--------+
-    //    |shenzhen|    29.0|
-    //    +--------+--------+
-
-  ```
-
-## CREATE EXTERNAL TABLE
-  This function allows user to create external table by specifying location.
-  ```
-  CREATE EXTERNAL TABLE [IF NOT EXISTS] [db_name.]table_name 
-  STORED BY 'carbondata' LOCATION ‘$FilesPath’
-  ```
-  
-### Create external table on managed table data location.
-  Managed table data location provided will have both FACT and Metadata 
folder. 
-  This data can be generated by creating a normal carbon table and use this 
path as $FilesPath in the above syntax.
-  
-  **Example:**
-  ```
-  sql("CREATE TABLE origin(key INT, value STRING) STORED BY 'carbondata'")
-  sql("INSERT INTO origin select 100,'spark'")
-  sql("INSERT INTO origin select 200,'hive'")
-  // creates a table in $storeLocation/origin
-  
-  sql(s"""
-  |CREATE EXTERNAL TABLE source
-  |STORED BY 'carbondata'
-  |LOCATION '$storeLocation/origin'
-  """.stripMargin)
-  checkAnswer(sql("SELECT count(*) from source"), sql("SELECT count(*) from 
origin"))
-  ```
-  
-### Create external table on Non-Transactional table data location.
-  Non-Transactional table data location will have only carbondata and 
carbonindex files, there will not be a metadata folder (table status and 
schema).
-  Our SDK module currently support writing data in this format.
-  
-  **Example:**
-  ```
-  sql(
-  s"""CREATE EXTERNAL TABLE sdkOutputTable STORED BY 'carbondata' LOCATION
-  |'$writerPath' """.stripMargin)
-  ```
-  
-  Here writer path will have carbondata and index files.
-  This can be SDK output. Refer [SDK Writer 
Guide](https://github.com/apache/carbondata/blob/master/docs/sdk-writer-guide.md).
 
-  
-  **Note:**
-  1. Dropping of the external table should not delete the files present in the 
location.
-  2. When external table is created on non-transactional table data, 
-  external table will be registered with the schema of carbondata files.
-  If multiple files with different schema is present, exception will be thrown.
-  So, If table registered with one schema and files are of different schema, 
-  suggest to drop the external table and create again to register table with 
new schema.  
-
-
-## CREATE DATABASE 
-  This function creates a new database. By default the database is created in 
Carbon store location, but you can also specify custom location.
-  ```
-  CREATE DATABASE [IF NOT EXISTS] database_name [LOCATION path];
-  ```
-  
-### Example
-  ```
-  CREATE DATABASE carbon LOCATION “hdfs://name_cluster/dir1/carbonstore”;
-  ```
-
-## TABLE MANAGEMENT  
-
-### SHOW TABLE
-
-  This command can be used to list all the tables in current database or all 
the tables of a specific database.
-  ```
-  SHOW TABLES [IN db_Name]
-  ```
-
-  Example:
-  ```
-  SHOW TABLES
-  OR
-  SHOW TABLES IN defaultdb
-  ```
-
-### ALTER TABLE
-
-  The following section introduce the commands to modify the physical or 
logical state of the existing table(s).
-
-   - **RENAME TABLE**
-   
-     This command is used to rename the existing table.
-     ```
-     ALTER TABLE [db_name.]table_name RENAME TO new_table_name
-     ```
-
-     Examples:
-     ```
-     ALTER TABLE carbon RENAME TO carbonTable
-     OR
-     ALTER TABLE test_db.carbon RENAME TO test_db.carbonTable
-     ```
-
-   - **ADD COLUMNS**
-   
-     This command is used to add a new column to the existing table.
-     ```
-     ALTER TABLE [db_name.]table_name ADD COLUMNS (col_name data_type,...)
-     TBLPROPERTIES('DICTIONARY_INCLUDE'='col_name,...',
-     'DEFAULT.VALUE.COLUMN_NAME'='default_value')
-     ```
-
-     Examples:
-     ```
-     ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
-     ```
-
-     ```
-     ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) 
TBLPROPERTIES('DICTIONARY_INCLUDE'='a1')
-     ```
-
-     ```
-     ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) 
TBLPROPERTIES('DEFAULT.VALUE.a1'='10')
-     ```
-      NOTE: Add Complex datatype columns is not supported.
-
-Users can specify which columns to include and exclude for local dictionary 
generation after adding new columns. These will be appended with the already 
existing local dictionary include and exclude columns of main table 
respectively.
-  ```
-     ALTER TABLE carbon ADD COLUMNS (a1 STRING, b1 STRING) 
TBLPROPERTIES('LOCAL_DICTIONARY_INCLUDE'='a1','LOCAL_DICTIONARY_EXCLUDE'='b1')
-  ```
-
-   - **DROP COLUMNS**
-   
-     This command is used to delete the existing column(s) in a table.
-     ```
-     ALTER TABLE [db_name.]table_name DROP COLUMNS (col_name, ...)
-     ```
-
-     Examples:
-     ```
-     ALTER TABLE carbon DROP COLUMNS (b1)
-     OR
-     ALTER TABLE test_db.carbon DROP COLUMNS (b1)
-     
-     ALTER TABLE carbon DROP COLUMNS (c1,d1)
-     ```
-     NOTE: Drop Complex child column is not supported.
-
-   - **CHANGE DATA TYPE**
-   
-     This command is used to change the data type from INT to BIGINT or 
decimal precision from lower to higher.
-     Change of decimal data type from lower precision to higher precision will 
only be supported for cases where there is no data loss.
-     ```
-     ALTER TABLE [db_name.]table_name CHANGE col_name col_name 
changed_column_type
-     ```
-
-     Valid Scenarios
-     - Invalid scenario - Change of decimal precision from (10,2) to (10,5) is 
invalid as in this case only scale is increased but total number of digits 
remains the same.
-     - Valid scenario - Change of decimal precision from (10,2) to (12,3) is 
valid as the total number of digits are increased by 2 but scale is increased 
only by 1 which will not lead to any data loss.
-     - **NOTE:** The allowed range is 38,38 (precision, scale) and is a valid 
upper case scenario which is not resulting in data loss.
-
-     Example1:Changing data type of column a1 from INT to BIGINT.
-     ```
-     ALTER TABLE test_db.carbon CHANGE a1 a1 BIGINT
-     ```
-     
-     Example2:Changing decimal precision of column a1 from 10 to 18.
-     ```
-     ALTER TABLE test_db.carbon CHANGE a1 a1 DECIMAL(18,2)
-     ```
-- **MERGE INDEX**
-   
-     This command is used to merge all the CarbonData index files 
(.carbonindex) inside a segment to a single CarbonData index merge file 
(.carbonindexmerge). This enhances the first query performance.
-     ```
-      ALTER TABLE [db_name.]table_name COMPACT 'SEGMENT_INDEX'
-      ```
-      
-      Examples:
-      ```
-      ALTER TABLE test_db.carbon COMPACT 'SEGMENT_INDEX'
-      ```
-      **NOTE:**
-      * Merge index is not supported on streaming table.
-      
-- **SET and UNSET for Local Dictionary Properties**
-
-   When set command is used, all the newly set properties will override the 
corresponding old properties if exists.
-  
-   Example to SET Local Dictionary Properties:
-    ```
-   ALTER TABLE tablename SET 
TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='false','LOCAL_DICTIONARY_THRESHOLD'='1000','LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')
-    ```
-   When Local Dictionary properties are unset, corresponding default values 
will be used for these properties.
-      
-   Example to UNSET Local Dictionary Properties:
-    ```
-   ALTER TABLE tablename UNSET 
TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE','LOCAL_DICTIONARY_THRESHOLD','LOCAL_DICTIONARY_INCLUDE','LOCAL_DICTIONARY_EXCLUDE')
-    ```
-    
-   **NOTE:** For old tables, by default, local dictionary is disabled. If user 
wants local dictionary for these tables, user can enable/disable local 
dictionary for new data at their discretion. 
-   This can be achieved by using the alter table set command.
-
-### DROP TABLE
-  
-  This command is used to delete an existing table.
-  ```
-  DROP TABLE [IF EXISTS] [db_name.]table_name
-  ```
-
-  Example:
-  ```
-  DROP TABLE IF EXISTS productSchema.productSalesTable
-  ```
- 
-### REFRESH TABLE
- 
-  This command is used to register Carbon table to HIVE meta store catalogue 
from existing Carbon table data.
-  ```
-  REFRESH TABLE $db_NAME.$table_NAME
-  ```
-  
-  Example:
-  ```
-  REFRESH TABLE dbcarbon.productSalesTable
-  ```
-  
-  **NOTE:** 
-  * The new database name and the old database name should be the same.
-  * Before executing this command, the old table schema and data should be copied into the new database location.
-  * If the table is an aggregate table, then all the aggregate tables should be copied to the new database location.
-  * For an old store, the time zones of the source and destination clusters should be the same.
-  * If the old cluster used the Hive metastore to store the schema, refresh will not work as the schema file does not exist in the file system.
-
-### Table and Column Comment
-
-  You can provide more information about a table by using a table comment. Similarly, you can provide more information about a particular column using a column comment.
-  You can see the column comments of an existing table using the DESCRIBE FORMATTED command (see the example below).
-  
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type 
[COMMENT col_comment], ...)]
-    [COMMENT table_comment]
-  STORED BY 'carbondata'
-  [TBLPROPERTIES (property_name=property_value, ...)]
-  ```
-  
-  Example:
-  ```
-  CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                productNumber Int COMMENT 'unique serial 
number for product')
-  COMMENT 'This is table comment'
-   STORED BY 'carbondata'
-   TBLPROPERTIES ('DICTIONARY_INCLUDE'='productNumber')
-  ```
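-
-  For instance, assuming the table above has been created, its table and column comments can be viewed with:
-  ```
-  DESCRIBE FORMATTED productSchema.productSalesTable
-  ```
-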
-  You can also SET and UNSET table comment using ALTER command.
-
-  Example to SET table comment:
-  
-  ```
-  ALTER TABLE carbon SET TBLPROPERTIES ('comment'='this table comment is 
modified');
-  ```
-  
-  Example to UNSET table comment:
-  
-  ```
-  ALTER TABLE carbon UNSET TBLPROPERTIES ('comment');
-  ```
-
-## LOAD DATA
-
-### LOAD FILES TO CARBONDATA TABLE
-  
-  This command is used to load CSV files into a CarbonData table. OPTIONS are not mandatory for the data loading process.
-  Inside OPTIONS, the user can provide options such as DELIMITER, QUOTECHAR, FILEHEADER, ESCAPECHAR and MULTILINE as per the requirement.
-  
-  ```
-  LOAD DATA [LOCAL] INPATH 'folder_path' 
-  INTO TABLE [db_name.]table_name 
-  OPTIONS(property_name=property_value, ...)
-  ```
-
-  You can use the following options to load data:
-  
-  - **DELIMITER:** Delimiters can be provided in the load command.
-
-    ``` 
-    OPTIONS('DELIMITER'=',')
-    ```
-
-  - **QUOTECHAR:** Quote Characters can be provided in the load command.
-
-    ```
-    OPTIONS('QUOTECHAR'='"')
-    ```
-
-  - **COMMENTCHAR:** Comment characters can be provided in the load command if the user wants to comment out lines.
-
-    ```
-    OPTIONS('COMMENTCHAR'='#')
-    ```
-
-  - **HEADER:** When the CSV file to be loaded does not contain a file header and the column order matches the table schema, add 'HEADER'='false' to the load data SQL so that the file header need not be provided. By default the value is 'true'.
-  false: CSV file is without a file header.
-  true: CSV file is with a file header.
-  
-    ```
-    OPTIONS('HEADER'='false') 
-    ```
-
-    **NOTE:** If the HEADER option exists and is set to 'true', then the FILEHEADER option is not required.
-       
-  - **FILEHEADER:** Headers can be provided in the LOAD DATA command if 
headers are missing in the source files.
-
-    ```
-    OPTIONS('FILEHEADER'='column1,column2') 
-    ```
-
-  - **MULTILINE:** CSV with new line character in quotes.
-
-    ```
-    OPTIONS('MULTILINE'='true') 
-    ```
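-
-    For example, an illustrative CSV row such as the following contains a quoted field spanning two lines and needs 'MULTILINE'='true' to be parsed as a single record:
-    ```
-    1,"This value spans
-    two lines",2016-01-01
-    ```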
-
-  - **ESCAPECHAR:** An escape character can be provided if the user wants strict validation of escape characters in CSV files.
-
-    ```
-    OPTIONS('ESCAPECHAR'='\') 
-    ```
-  - **SKIP_EMPTY_LINE:** This option will ignore the empty line in the CSV 
file during the data load.
-
-    ```
-    OPTIONS('SKIP_EMPTY_LINE'='TRUE/FALSE') 
-    ```
-
-  - **COMPLEX_DELIMITER_LEVEL_1:** Split the complex type data column in a row 
(eg., a$b$c --> Array = {a,b,c}).
-
-    ```
-    OPTIONS('COMPLEX_DELIMITER_LEVEL_1'='$') 
-    ```
-
-  - **COMPLEX_DELIMITER_LEVEL_2:** Split the nested complex type data column in a row. Applies the level_1 delimiter and then applies the level_2 delimiter based on the complex data type (eg., a:b$c:d --> Array<Array> = {{a,b},{c,d}}).
-
-    ```
-    OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':')
-    ```
-
-  - **ALL_DICTIONARY_PATH:** All dictionary files path.
-
-    ```
-    OPTIONS('ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary')
-    ```
-
-  - **COLUMNDICT:** Dictionary file path for specified column.
-
-    ```
-    
OPTIONS('COLUMNDICT'='column1:dictionaryFilePath1,column2:dictionaryFilePath2')
-    ```
-    **NOTE:** ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.
-    
-  - **DATEFORMAT/TIMESTAMPFORMAT:** Date and Timestamp format for specified 
column.
-
-    ```
-    OPTIONS('DATEFORMAT' = 'yyyy-MM-dd','TIMESTAMPFORMAT'='yyyy-MM-dd 
HH:mm:ss')
-    ```
-    **NOTE:** Date formats are specified by date pattern strings. The date pattern letters in CarbonData are the same as in Java. Refer to [SimpleDateFormat](http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html).
-
-  - **SORT COLUMN BOUNDS:** Range bounds for sort columns.
-
-    Suppose the table is created with 'SORT_COLUMNS'='name,id', the value range for name is aaa~zzz and the value range for id is 0~1000. Then during data loading, we can specify the following option to enhance data loading performance (an illustrative load example follows the notes below).
-    ```
-    OPTIONS('SORT_COLUMN_BOUNDS'='f,250;l,500;r,750')
-    ```
-    Each bound is separated by ';' and each field value in a bound is separated by ','. In the example above, we provide 3 bounds to distribute records to 4 partitions. The values 'f','l','r' can evenly distribute the records. Inside CarbonData, for each record we compare the values of the sort columns with those of the bounds and decide which partition the record will be forwarded to.
-
-    **NOTE:**
-    * SORT_COLUMN_BOUNDS will be used only when the SORT_SCOPE is 'local_sort'.
-    * CarbonData will use these bounds as ranges to process data concurrently during the final sort procedure. The records will be sorted and written out inside each partition. Since each partition is sorted, all records will be sorted.
-    * Since the actual order and literal order of the dictionary column are not necessarily the same, we do not recommend using this feature if the first sort column is 'dictionary_include'.
-    * This option works better if your CPU usage during loading is low. If your system is already under heavy CPU load, it is better not to use this option. Besides, it depends on the user to specify the bounds. If the user does not know the exact bounds that would distribute the data evenly among the bounds, loading performance will still be better than before, or at least the same as before.
-    * Users can find more information about this option in the description of PR1953.
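-
-    As an illustration (the file path and table name here are hypothetical), the option is passed together with the other load options:
-    ```
-    LOAD DATA INPATH '/opt/rawdata/data.csv' INTO TABLE sort_bounds_table
-    OPTIONS('DELIMITER'=',', 'SORT_COLUMN_BOUNDS'='f,250;l,500;r,750')
-    ```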
-
-  - **SINGLE_PASS:** Single Pass Loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in scenarios where the subsequent data loads after the initial load involve fewer incremental updates to the dictionary.
-
-    This option specifies whether to use single pass for loading data or not. By default, this option is set to FALSE.
-
-   ```
-    OPTIONS('SINGLE_PASS'='TRUE')
-   ```
-
-   **NOTE:**
-   * If this option is set to TRUE then data loading will take less time.
-   * If this option is set to some invalid value other than TRUE or FALSE then 
it uses the default value.
-
-   Example:
-
-   ```
-   LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
-   options('DELIMITER'=',', 'QUOTECHAR'='"','COMMENTCHAR'='#',
-   'HEADER'='false',
-   'FILEHEADER'='empno,empname,designation,doj,workgroupcategory,
-   workgroupcategoryname,deptno,deptname,projectcode,
-   projectjoindate,projectenddate,attendance,utilization,salary',
-   'MULTILINE'='true','ESCAPECHAR'='\','COMPLEX_DELIMITER_LEVEL_1'='$',
-   'COMPLEX_DELIMITER_LEVEL_2'=':',
-   'ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary',
-   'SINGLE_PASS'='TRUE')
-   ```
-
-  - **BAD RECORDS HANDLING:** Methods of handling bad records are as follows:
-
-    * Load all of the data before dealing with the errors.
-    * Clean or delete bad records before loading data or stop the loading when 
bad records are found.
-
-    ```
-    OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true', 
'BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon', 
'BAD_RECORDS_ACTION'='REDIRECT', 'IS_EMPTY_DATA_BAD_RECORD'='false')
-    ```
-
-  **NOTE:**
-  * The BAD_RECORDS_ACTION property can have four types of actions for bad records: FORCE, REDIRECT, IGNORE and FAIL.
-  * FAIL is the default value. If the FAIL option is used, then data loading fails if any bad records are found.
-  * If the REDIRECT option is used, CarbonData will add all bad records into a separate CSV file. However, this file must not be used for subsequent data loading because the content may not exactly match the source record. You are advised to cleanse the original source record for further data ingestion. This option is used to remind you which records are bad records.
-  * If the FORCE option is used, then the data is auto-converted by storing the bad records as NULL before loading.
-  * If the IGNORE option is used, then bad records are neither loaded nor written to the separate CSV file.
-  * In loaded data, if all records are bad records, the BAD_RECORDS_ACTION is invalid and the load operation fails.
-  * The default maximum number of characters per column is 32000. If there are more than 32000 characters in a column, please refer to the *String longer than 32000 characters* section.
-  * The bad records path can be specified in the create statement, the load statement and carbon properties.
-  The value specified in load has the highest priority, and the value specified in carbon properties has the least priority.
-
-   **Bad Records Path:**
-        
-   This property is used to specify the location where bad records would be 
written.
-        
-   ```
-   TBLPROPERTIES('BAD_RECORDS_PATH'='/opt/badrecords')
-   ```
-        
-  Example:
-
-  ```
-  LOAD DATA INPATH 'filepath.csv' INTO TABLE tablename
-  
OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true','BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon',
-  'BAD_RECORDS_ACTION'='REDIRECT','IS_EMPTY_DATA_BAD_RECORD'='false')
-  ```
-
-  - **GLOBAL_SORT_PARTITIONS:** If the SORT_SCOPE is defined as GLOBAL_SORT, then the user can specify the number of partitions to use while shuffling data for sort using GLOBAL_SORT_PARTITIONS. If it is not configured, or is configured to a value less than 1, then the number of map tasks is used as the number of reduce tasks. It is recommended that each reduce task deals with 512MB-1GB of data (an illustrative load example is shown after the note below).
-
-  ```
-  OPTIONS('GLOBAL_SORT_PARTITIONS'='2')
-  ```
-
-   NOTE:
-   * GLOBAL_SORT_PARTITIONS should be Integer type, the range is 
[1,Integer.MaxValue].
-   * It is only used when the SORT_SCOPE is GLOBAL_SORT.
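-
-  An illustrative sketch (the table name and file path are hypothetical), assuming the table was created with 'SORT_SCOPE'='GLOBAL_SORT':
-  ```
-  LOAD DATA INPATH 'hdfs://hacluster/tmp/sales.csv' INTO TABLE sales_table
-  OPTIONS('DELIMITER'=',', 'GLOBAL_SORT_PARTITIONS'='32')
-  ```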
-
-### INSERT DATA INTO CARBONDATA TABLE
-
-  This command inserts data into a CarbonData table. It is defined as a combination of two queries, Insert and Select.
-  It inserts records from a source table into a target CarbonData table; the source table can be a Hive table, a Parquet table or a CarbonData table itself.
-  It can also aggregate the records of a table by performing a Select query on the source table and loading the resultant records into a CarbonData table.
-
-  ```
-  INSERT INTO TABLE <CARBONDATA TABLE> SELECT * FROM sourceTableName 
-  [ WHERE { <filter_condition> } ]
-  ```
-
-  You can also omit the `table` keyword and write your query as:
- 
-  ```
-  INSERT INTO <CARBONDATA TABLE> SELECT * FROM sourceTableName 
-  [ WHERE { <filter_condition> } ]
-  ```
-
-  Overwrite insert data:
-  ```
-  INSERT OVERWRITE TABLE <CARBONDATA TABLE> SELECT * FROM sourceTableName 
-  [ WHERE { <filter_condition> } ]
-  ```
-
-  **NOTE:**
-  * The source table and the CarbonData table must have the same table schema.
-  * The data types of the source and destination table columns should be the same.
-  * The INSERT INTO command does not support partial success; if bad records are found, it will fail.
-  * Data cannot be loaded or updated in the source table while an insert from the source table to the target table is in progress.
-
-  Examples
-  ```
-  INSERT INTO table1 SELECT item1, sum(item2 + 1000) as result FROM table2 
group by item1
-  ```
-
-  ```
-  INSERT INTO table1 SELECT item1, item2, item3 FROM table2 where item2='xyz'
-  ```
-
-  ```
-  INSERT OVERWRITE TABLE table1 SELECT * FROM TABLE2
-  ```
-
-## UPDATE AND DELETE
-  
-### UPDATE
-  
-  This command allows you to update the CarbonData table based on a column expression and optional filter conditions.
-    
-  ```
-  UPDATE <table_name> 
-  SET (column_name1, column_name2, ... column_name n) = (column1_expression , 
column2_expression, ... column n_expression )
-  [ WHERE { <filter_condition> } ]
-  ```
-  
-  Alternatively, the following command can also be used to update the CarbonData table:
-  
-  ```
-  UPDATE <table_name>
-  SET (column_name1, column_name2) =(select sourceColumn1, sourceColumn2 from 
sourceTable [ WHERE { <filter_condition> } ] )
-  [ WHERE { <filter_condition> } ]
-  ```
-  
-  **NOTE:** The update command fails if multiple input rows in the source table match a single row in the destination table.
-  
-  Examples:
-  ```
-  UPDATE t3 SET (t3_salary) = (t3_salary + 9) WHERE t3_name = 'aaa1'
-  ```
-  
-  ```
-  UPDATE t3 SET (t3_date, t3_country) = ('2017-11-18', 'india') WHERE 
t3_salary < 15003
-  ```
-  
-  ```
-  UPDATE t3 SET (t3_country, t3_name) = (SELECT t5_country, t5_name FROM t5 
WHERE t5_id = 5) WHERE t3_id < 5
-  ```
-  
-  ```
-  UPDATE t3 SET (t3_date, t3_serialname, t3_salary) = (SELECT '2099-09-09', 
t5_serialname, '9999' FROM t5 WHERE t5_id = 5) WHERE t3_id < 5
-  ```
-  
-  
-  ```
-  UPDATE t3 SET (t3_country, t3_salary) = (SELECT t5_country, t5_salary FROM 
t5 FULL JOIN t3 u WHERE u.t3_id = t5_id and t5_id=6) WHERE t3_id >6
-  ```
-   NOTE: Updating complex data type columns is not supported.
-    
-### DELETE
-
-  This command allows us to delete records from a CarbonData table.
-  ```
-  DELETE FROM table_name [WHERE expression]
-  ```
-  
-  Examples:
-  
-  ```
-  DELETE FROM carbontable WHERE column1  = 'china'
-  ```
-  
-  ```
-  DELETE FROM carbontable WHERE column1 IN ('china', 'USA')
-  ```
-  
-  ```
-  DELETE FROM carbontable WHERE column1 IN (SELECT column11 FROM sourceTable2)
-  ```
-  
-  ```
-  DELETE FROM carbontable WHERE column1 IN (SELECT column11 FROM sourceTable2 
WHERE column1 = 'USA')
-  ```
-
-## COMPACTION
-
-  Compaction improves the query performance significantly. 
-  
-  There are several types of compaction.
-  
-  ```
-  ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR/CUSTOM'
-  ```
-
-  - **Minor Compaction**
-  
-  In Minor compaction, the user can specify the number of loads to be merged.
-  Minor compaction is triggered for every data load if the parameter carbon.enable.auto.load.merge is set to true.
-  If any segments are available to be merged, then compaction runs in parallel with the data load. There are 2 levels in minor compaction:
-  * Level 1: Merging of the segments which are not yet compacted.
-  * Level 2: Merging of the compacted segments again to form a larger segment.
-  
-  ```
-  ALTER TABLE table_name COMPACT 'MINOR'
-  ```
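-
-  For reference, an illustrative carbon.properties sketch to enable automatic minor compaction (the values shown are examples, not recommendations):
-  ```
-  carbon.enable.auto.load.merge=true
-  carbon.compaction.level.threshold=4,3
-  ```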
-  
-  - **Major Compaction**
-  
-  In Major compaction, multiple segments can be merged into one large segment.
-  The user specifies the compaction size up to which segments can be merged; major compaction is usually done during off-peak time.
-  Configure the property carbon.major.compaction.size with an appropriate value in MB (an illustrative entry follows the command below).
-  
-  This command merges the specified number of segments into one segment: 
-     
-  ```
-  ALTER TABLE table_name COMPACT 'MAJOR'
-  ```
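-
-  For example, an illustrative carbon.properties entry (the size value is an example) so that only segments smaller than 1024 MB are considered for major compaction:
-  ```
-  carbon.major.compaction.size=1024
-  ```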
-  
-  - **Custom Compaction**
-  
-  In Custom compaction, user can directly specify segment ids to be merged 
into one large segment. 
-  All specified segment ids should exist and be valid, otherwise compaction 
will fail. 
-  Custom compaction is usually done during the off-peak time. 
-  
-  ```
-  ALTER TABLE table_name COMPACT 'CUSTOM' WHERE SEGMENT.ID IN (2,3,4)
-  ```
-  NOTE: Compaction is not supported for tables containing complex data type columns.
-  
-
-  - **CLEAN SEGMENTS AFTER Compaction**
-  
-  Clean the segments which are compacted:
-  ```
-  CLEAN FILES FOR TABLE carbon_table
-  ```
-
-## PARTITION
-
-### STANDARD PARTITION
-
-  This partitioning is similar to Spark and Hive partitioning; the user can use any column to build the partition:
-  
-#### Create Partition Table
-
-  This command allows you to create table with partition.
-  
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
-    [(col_name data_type , ...)]
-    [COMMENT table_comment]
-    [PARTITIONED BY (col_name data_type , ...)]
-    [STORED BY file_format]
-    [TBLPROPERTIES (property_name=property_value, ...)]
-  ```
-  
-  Example:
-  ```
-   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                productNumber INT,
-                                productName STRING,
-                                storeCity STRING,
-                                storeProvince STRING,
-                                saleQuantity INT,
-                                revenue INT)
-  PARTITIONED BY (productCategory STRING, productBatch STRING)
-  STORED BY 'carbondata'
-  ```
-   NOTE: Hive partition is not supported on complex datatype columns.
-               
-#### Load Data Using Static Partition 
-
-  This command allows you to load data using static partition.
-  
-  ```
-  LOAD DATA [LOCAL] INPATH 'folder_path' 
-  INTO TABLE [db_name.]table_name PARTITION (partition_spec) 
-  OPTIONS(property_name=property_value, ...)    
-  INSERT INTO TABLE [db_name.]table_name PARTITION (partition_spec) <SELECT STATEMENT>
-  
-  Example:
-  ```
-  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.csv'
-  INTO TABLE locationTable
-  PARTITION (country = 'US', state = 'CA')  
-  INSERT INTO TABLE locationTable
-  PARTITION (country = 'US', state = 'AL')
-  SELECT <columns list excluding partition columns> FROM another_user
-  ```
-
-#### Load Data Using Dynamic Partition
-
-  This command allows you to load data using dynamic partition. If partition 
spec is not specified, then the partition is considered as dynamic.
-
-  Example:
-  ```
-  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.csv'
-  INTO TABLE locationTable          
-  INSERT INTO TABLE locationTable
-  SELECT <columns list excluding partition columns> FROM another_user
-  ```
-
-#### Show Partitions
-
-  This command gets the Hive partition information of the table
-
-  ```
-  SHOW PARTITIONS [db_name.]table_name
-  ```
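-
-  Example (using the locationTable from the static partition example above):
-  ```
-  SHOW PARTITIONS locationTable
-  ```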
-
-#### Drop Partition
-
-  This command drops the specified Hive partition only.
-  ```
-  ALTER TABLE table_name DROP [IF EXISTS] PARTITION (part_spec, ...)
-  ```
-  
-  Example:
-  ```
-  ALTER TABLE locationTable DROP PARTITION (country = 'US');
-  ```
-  
-#### Insert OVERWRITE
-  
-  This command allows you to insert or load data with overwrite on a specific partition.
-  
-  ```
-   INSERT OVERWRITE TABLE table_name
-   PARTITION (column = 'partition_name')
-   select_statement
-  ```
-  
-  Example:
-  ```
-  INSERT OVERWRITE TABLE partitioned_user
-  PARTITION (country = 'US')
-  SELECT * FROM another_user au 
-  WHERE au.country = 'US';
-  ```
-
-### CARBONDATA PARTITION (HASH, RANGE, LIST) -- Alpha feature; this partition feature does not support update and delete of data.
-
-  This partitioning supports three types (Hash, Range, List). Similar to other systems' partition features, CarbonData's partition feature can be used to improve query performance by filtering on the partition column.
-
-### Create Hash Partition Table
-
-  This command allows us to create hash partition.
-  
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
-                    [(col_name data_type , ...)]
-  PARTITIONED BY (partition_col_name data_type)
-  STORED BY 'carbondata'
-  [TBLPROPERTIES ('PARTITION_TYPE'='HASH',
-                  'NUM_PARTITIONS'='N' ...)]
-  ```
-  **NOTE:** N is the number of hash partitions
-
-
-  Example:
-  ```
-  CREATE TABLE IF NOT EXISTS hash_partition_table(
-      col_A STRING,
-      col_B INT,
-      col_C LONG,
-      col_D DECIMAL(10,2),
-      col_F TIMESTAMP
-  ) PARTITIONED BY (col_E LONG)
-  STORED BY 'carbondata' 
TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='9')
-  ```
-
-### Create Range Partition Table
-
-  This command allows us to create range partition.
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
-                    [(col_name data_type , ...)]
-  PARTITIONED BY (partition_col_name data_type)
-  STORED BY 'carbondata'
-  [TBLPROPERTIES ('PARTITION_TYPE'='RANGE',
-                  'RANGE_INFO'='2014-01-01, 2015-01-01, 2016-01-01, ...')]
-  ```
-
-  **NOTE:**
-  * The 'RANGE_INFO' must be defined in ascending order in the table 
properties.
-  * The default format for partition column of Date/Timestamp type is 
yyyy-MM-dd. Alternate formats for Date/Timestamp could be defined in 
CarbonProperties.
-
-  Example:
-  ```
-  CREATE TABLE IF NOT EXISTS range_partition_table(
-      col_A STRING,
-      col_B INT,
-      col_C LONG,
-      col_D DECIMAL(10,2),
-      col_E LONG
-   ) PARTITIONED BY (col_F TIMESTAMP)
-   STORED BY 'carbondata'
-   TBLPROPERTIES('PARTITION_TYPE'='RANGE',
-   'RANGE_INFO'='2015-01-01, 2016-01-01, 2017-01-01, 2017-02-01')
-  ```
-
-### Create List Partition Table
-
-  This command allows us to create list partition.
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
-                    [(col_name data_type , ...)]
-  PARTITIONED BY (partition_col_name data_type)
-  STORED BY 'carbondata'
-  [TBLPROPERTIES ('PARTITION_TYPE'='LIST',
-                  'LIST_INFO'='A, B, C, ...')]
-  ```
-  **NOTE:** List partition supports list info in one level group.
-
-  Example:
-  ```
-  CREATE TABLE IF NOT EXISTS list_partition_table(
-      col_B INT,
-      col_C LONG,
-      col_D DECIMAL(10,2),
-      col_E LONG,
-      col_F TIMESTAMP
-   ) PARTITIONED BY (col_A STRING)
-   STORED BY 'carbondata'
-   TBLPROPERTIES('PARTITION_TYPE'='LIST',
-   'LIST_INFO'='aaaa, bbbb, (cccc, dddd), eeee')
-  ```
-
-
-### Show Partitions
-
-  The following command is executed to get the partition information of the 
table
-
-  ```
-  SHOW PARTITIONS [db_name.]table_name
-  ```
-
-### Add a new partition
-
-  ```
-  ALTER TABLE [db_name].table_name ADD PARTITION('new_partition')
-  ```
-
-### Split a partition
-
-  ```
-  ALTER TABLE [db_name].table_name SPLIT PARTITION(partition_id) 
INTO('new_partition1', 'new_partition2'...)
-  ```
-
-### Drop a partition
-
-  Drop only the partition definition, but keep the data
-  ```
-  ALTER TABLE [db_name].table_name DROP PARTITION(partition_id)
-  ```
-
-  Drop both partition definition and data
-  ```
-  ALTER TABLE [db_name].table_name DROP PARTITION(partition_id) WITH DATA
-  ```
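-
-  As an illustration (using the list_partition_table created above; the partition ids and new partition values are hypothetical):
-  ```
-  ALTER TABLE list_partition_table ADD PARTITION('OTHERS')
-  ALTER TABLE list_partition_table SPLIT PARTITION(3) INTO('cccc', 'dddd')
-  ALTER TABLE list_partition_table DROP PARTITION(2) WITH DATA
-  ```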
-
-  **NOTE:**
-  * Hash partition table is not supported for ADD, SPLIT and DROP commands.
-  * Partition Id: in CarbonData, unlike Hive, folders are not used to divide partitions; instead, the partition id is used in place of the task id. This makes use of that characteristic and meanwhile reduces some metadata.
-
-  ```
-  SegmentDir/0_batchno0-0-1502703086921.carbonindex
-            ^
-  SegmentDir/part-0-0_batchno0-0-1502703086921.carbondata
-                     ^
-  ```
-
-  Here are some useful tips to improve the query performance of CarbonData partition tables:
-  * The partition column can be excluded from SORT_COLUMNS; this allows the other columns to be sorted efficiently.
-  * When writing SQL on a partition table, try to use filters on the partition column (see the example below).
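-
-  For instance (a hypothetical query against the partitioned_user table from the earlier example), filtering on the partition column allows CarbonData to prune partitions:
-  ```
-  SELECT count(*) FROM partitioned_user WHERE country = 'US'
-  ```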
-
-## BUCKETING
-
-  The bucketing feature can be used to distribute/organize the table/partition data into multiple files such
-  that similar records are present in the same file. While creating a table, the user needs to specify the
-  columns to be used for bucketing and the number of buckets. The hash value of the bucketing columns is used to select the bucket.
-
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
-                    [(col_name data_type, ...)]
-  STORED BY 'carbondata'
-  TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets',
-  'BUCKETCOLUMNS'='columnname')
-  ```
-
-  **NOTE:**
-  * Bucketing cannot be performed for columns of Complex Data Types.
-  * Columns in the BUCKETCOLUMNS parameter must be dimensions. The BUCKETCOLUMNS parameter cannot be a measure or a combination of measures and dimensions.
-
-  Example:
-  ```
-  CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                productNumber INT,
-                                saleQuantity INT,
-                                productName STRING,
-                                storeCity STRING,
-                                storeProvince STRING,
-                                productCategory STRING,
-                                productBatch STRING,
-                                revenue INT)
-  STORED BY 'carbondata'
-  TBLPROPERTIES ('BUCKETNUMBER'='4', 'BUCKETCOLUMNS'='productName')
-  ```
-  
-## SEGMENT MANAGEMENT  
-
-### SHOW SEGMENT
-
-  This command is used to list the segments of CarbonData table.
-
-  ```
-  SHOW [HISTORY] SEGMENTS FOR TABLE [db_name.]table_name LIMIT 
number_of_segments
-  ```
-  
-  Example:
-  Show visible segments
-  ```
-  SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4
-  ```
-  Show all segments, including invisible segments
-  ```
-  SHOW HISTORY SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4
-  ```
-
-### DELETE SEGMENT BY ID
-
-  This command is used to delete a segment by using the segment ID. Each segment has a unique segment ID associated with it.
-  Using this segment ID, you can remove the segment.
-
-  The following command will get the segmentID.
-
-  ```
-  SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments
-  ```
-
-  After you retrieve the segment ID of the segment that you want to delete, 
execute the following command to delete the selected segment.
-
-  ```
-  DELETE FROM TABLE [db_name.]table_name WHERE SEGMENT.ID IN (segment_id1, 
segments_id2, ...)
-  ```
-
-  Example:
-
-  ```
-  DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0)
-  DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0,5,8)
-  ```
-
-### DELETE SEGMENT BY DATE
-
-  This command allows you to delete CarbonData segment(s) from the store based on the date provided by the user in the DML command.
-  Segments created before the particular date will be removed from the store.
-
-  ```
-  DELETE FROM TABLE [db_name.]table_name WHERE SEGMENT.STARTTIME BEFORE 
DATE_VALUE
-  ```
-
-  Example:
-  ```
-  DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.STARTTIME BEFORE 
'2017-06-01 12:05:06' 
-  ```
-
-### QUERY DATA WITH SPECIFIED SEGMENTS
-
-  This command is used to read data from specified segments during CarbonScan.
-  
-  Get the Segment ID:
-  ```
-  SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments
-  ```
-  
-  Set the segment IDs for table
-  ```
-  SET carbon.input.segments.<database_name>.<table_name> = <list of segment 
IDs>
-  ```
-  
-  **NOTE:**
-  carbon.input.segments: Specifies the segment IDs to be queried. This 
property allows you to query specified segments of the specified table. The 
CarbonScan will read data from specified segments only.
-  
-  If the user wants to query with segments read in multi-threaded mode, then CarbonSession.threadSet can be used instead of the SET query.
-  ```
-  CarbonSession.threadSet("carbon.input.segments.<database_name>.<table_name>", "<list of segment IDs>");
-  ```
-  
-  Reset the segment IDs
-  ```
-  SET carbon.input.segments.<database_name>.<table_name> = *;
-  ```
-  
-  If the user wants to query with segments read in multi-threaded mode, then CarbonSession.threadSet can be used instead of the SET query.
-  ```
-  CarbonSession.threadSet("carbon.input.segments.<database_name>.<table_name>", "*");
-  ```
-  
-  **Examples:**
-  
-  * Example to show the list of segment IDs, segment status and other required details, and then specify the list of segments to be read.
-  
-  ```
-  SHOW SEGMENTS FOR TABLE carbontable1;
-  
-  SET carbon.input.segments.db.carbontable1 = 1,3,9;
-  ```
-  
-  * Example to query with segments reading in multi threading mode:
-  
-  ```
-  CarbonSession.threadSet("carbon.input.segments.db.carbontable_Multi_Thread", "1,3");
-  ```
-  
-  * Example of threadSet in a multi-threaded environment (the following shows how it is used in Scala code):
-  
-  ```
-  def main(args: Array[String]) {
-    Future {
-      CarbonSession.threadSet("carbon.input.segments.db.carbontable_Multi_Thread", "1")
-      spark.sql("select count(empno) from db.carbontable_Multi_Thread").show()
-    }
-  }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6e50c1c6/docs/datamap-developer-guide.md
----------------------------------------------------------------------
diff --git a/docs/datamap-developer-guide.md b/docs/datamap-developer-guide.md
index 31afd34..6bac9b5 100644
--- a/docs/datamap-developer-guide.md
+++ b/docs/datamap-developer-guide.md
@@ -3,14 +3,17 @@
 ### Introduction
 DataMap is a data structure that can be used to accelerate certain query of 
the table. Different DataMap can be implemented by developers. 
 Currently, there are 2 types of DataMap supported:
-1. IndexDataMap: DataMap that leveraging index to accelerate filter query
-2. MVDataMap: DataMap that leveraging Materialized View to accelerate olap 
style query, like SPJG query (select, predicate, join, groupby)
+1. IndexDataMap: DataMap that leverages index to accelerate filter query
+2. MVDataMap: DataMap that leverages Materialized View to accelerate olap 
style query, like SPJG query (select, predicate, join, groupby)
 
 ### DataMap provider
 When user issues `CREATE DATAMAP dm ON TABLE main USING 'provider'`, the 
corresponding DataMapProvider implementation will be created and initialized. 
 Currently, the provider string can be:
-1. preaggregate: one type of MVDataMap that do pre-aggregate of single table
-2. timeseries: one type of MVDataMap that do pre-aggregate based on time 
dimension of the table
+1. preaggregate: A type of MVDataMap that does pre-aggregation on a single table
+2. timeseries: A type of MVDataMap that does pre-aggregation based on the time dimension of the table
 3. class name IndexDataMapFactory  implementation: Developer can implement new 
type of IndexDataMap by extending IndexDataMapFactory
 
-When user issues `DROP DATAMAP dm ON TABLE main`, the corresponding 
DataMapProvider interface will be called.
\ No newline at end of file
+When user issues `DROP DATAMAP dm ON TABLE main`, the corresponding 
DataMapProvider interface will be called.
+
+Details about [DataMap 
Management](./datamap/datamap-management.md#datamap-management) and supported 
[DSL](./datamap/datamap-management.md#overview) are documented 
[here](./datamap/datamap-management.md).
+

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6e50c1c6/docs/datamap/bloomfilter-datamap-guide.md
----------------------------------------------------------------------
diff --git a/docs/datamap/bloomfilter-datamap-guide.md 
b/docs/datamap/bloomfilter-datamap-guide.md
index 92810f8..b2e7d60 100644
--- a/docs/datamap/bloomfilter-datamap-guide.md
+++ b/docs/datamap/bloomfilter-datamap-guide.md
@@ -73,7 +73,7 @@ For instance, main table called **datamap_test** which is 
defined as:
     age int,
     city string,
     country string)
-  STORED BY 'carbondata'
+  STORED AS carbondata
   TBLPROPERTIES('SORT_COLUMNS'='id')
   ```
 
@@ -144,4 +144,5 @@ You can refer to the corresponding section in `CarbonData 
Lucene DataMap`.
 + In some scenarios, the BloomFilter datamap may not enhance the query 
performance significantly
  but if it can reduce the number of spark task,
  there is still a chance that BloomFilter datamap can enhance the performance 
for concurrent query.
-+ Note that BloomFilter datamap will decrease the data loading performance and 
may cause slightly storage expansion (for datamap index file).
\ No newline at end of file
++ Note that the BloomFilter datamap will decrease the data loading performance and may cause slight storage expansion (for the datamap index file).
+
