update dml document

Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/repo
Commit: 
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/commit/2f826c1b
Tree: 
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/tree/2f826c1b
Diff: 
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/diff/2f826c1b

Branch: refs/heads/asf-site
Commit: 2f826c1b59056e708aba202cc78385fb9e625ac3
Parents: b96a419
Author: chenliang613 <chenliang...@huawei.com>
Authored: Wed Apr 12 17:56:12 2017 +0530
Committer: chenliang613 <chenliang...@huawei.com>
Committed: Wed Apr 12 17:56:12 2017 +0530

----------------------------------------------------------------------
 content/configuration-parameters.html           |  34 +---
 content/data-management.html                    |  87 +++-------
 content/ddl-operation-on-carbondata.html        | 100 ++---------
 content/dml-operation-on-carbondata.html        | 133 ++-------------
 content/faq.html                                |  26 ---
 content/file-structure-of-carbondata.html       |  14 +-
 content/installation-guide.html                 |  89 ++++------
 content/quick-start-guide.html                  |  50 +-----
 content/supported-data-types-in-carbondata.html |   7 -
 content/troubleshooting.html                    | 169 ++++++-------------
 content/useful-tips-on-carbondata.html          |  56 ++----
 site.iml                                        |   2 +-
 src/main/webapp/configuration-parameters.html   |  34 +---
 src/main/webapp/data-management.html            |  87 +++-------
 .../webapp/ddl-operation-on-carbondata.html     | 100 ++---------
 .../webapp/dml-operation-on-carbondata.html     | 133 ++-------------
 src/main/webapp/faq.html                        |  26 ---
 .../webapp/file-structure-of-carbondata.html    |  14 +-
 src/main/webapp/installation-guide.html         |  89 ++++------
 src/main/webapp/quick-start-guide.html          |  50 +-----
 .../supported-data-types-in-carbondata.html     |   7 -
 src/main/webapp/troubleshooting.html            | 169 ++++++-------------
 src/main/webapp/useful-tips-on-carbondata.html  |  56 ++----
 23 files changed, 295 insertions(+), 1237 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/configuration-parameters.html
----------------------------------------------------------------------
diff --git a/content/configuration-parameters.html 
b/content/configuration-parameters.html
index a9274d5..b6624ca 100644
--- a/content/configuration-parameters.html
+++ b/content/configuration-parameters.html
@@ -156,26 +156,19 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="configuring-carbondata" class="anchor" href="#configuring-carbondata" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Configuring CarbonData</h1>
-
 <p>This tutorial guides you through the advanced configurations of CarbonData 
:</p>
-
 <ul>
 <li><a href="#system-configuration">System Configuration</a></li>
 <li><a href="#performance-configuration">Performance Configuration</a></li>
 <li><a href="#miscellaneous-configuration">Miscellaneous Configuration</a></li>
 <li><a href="#spark-configuration">Spark Configuration</a></li>
 </ul>
-
 <h2>
 <a id="system-configuration" class="anchor" href="#system-configuration" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>System Configuration</h2>
-
 <p>This section provides the details of all the configurations required for 
the CarbonData System.</p>
-
 <p><b></b></p><p align="center">System Configuration in carbon.properties</p>
-
 <table>
 <thead>
 <tr>
@@ -207,18 +200,13 @@
 </tr>
 </tbody>
 </table>
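 <p>As a minimal illustrative sketch only (the property names and paths here are assumptions to verify against the table above), a carbon.properties fragment for the system settings could look like :</p>
 <pre><code>carbon.storelocation=hdfs://localhost:9000/carbon/store
 carbon.badRecords.location=/opt/Carbon/Spark/badrecords
 </code></pre>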
-
 <h2>
 <a id="performance-configuration" class="anchor" 
href="#performance-configuration" aria-hidden="true"><span aria-hidden="true" 
class="octicon octicon-link"></span></a>Performance Configuration</h2>
-
 <p>This section provides the details of all the configurations required for 
CarbonData Performance Optimization.</p>
-
 <p><b></b></p><p align="center">Performance Configuration in 
carbon.properties</p>
-
 <ul>
 <li><strong>Data Loading Configuration</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -291,11 +279,9 @@
 </tr>
 </tbody>
 </table>
-
 <ul>
 <li><strong>Compaction Configuration</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -344,11 +330,9 @@
 </tr>
 </tbody>
 </table>
-
 <ul>
 <li><strong>Query Configuration</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -385,17 +369,12 @@
 </tr>
 </tbody>
 </table>
-
 <h2>
 <a id="miscellaneous-configuration" class="anchor" 
href="#miscellaneous-configuration" aria-hidden="true"><span aria-hidden="true" 
class="octicon octicon-link"></span></a>Miscellaneous Configuration</h2>
-
 <p><b></b></p><p align="center">Extra Configuration in carbon.properties</p>
-
 <ul>
-<li>
-<strong>Time format for CarbonData</strong> </li>
+<li><strong>Time format for CarbonData</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -412,11 +391,9 @@
 </tr>
 </tbody>
 </table>
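 <p>For example, assuming the property is named carbon.timestamp.format (an assumption; confirm against the table above), a timestamp format using Java date pattern letters could be set as :</p>
 <pre><code>carbon.timestamp.format=yyyy-MM-dd HH:mm:ss
 </code></pre>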
-
 <ul>
 <li><strong>Dataload Configuration</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -483,11 +460,9 @@
 </tr>
 </tbody>
 </table>
-
 <ul>
 <li><strong>Compaction Configuration</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -514,11 +489,9 @@
 </tr>
 </tbody>
 </table>
-
 <ul>
 <li><strong>Query Configuration</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -540,11 +513,9 @@
 </tr>
 </tbody>
 </table>
-
 <ul>
 <li><strong>Global Dictionary Configurations</strong></li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -581,12 +552,9 @@
 </tr>
 </tbody>
 </table>
-
 <h2>
 <a id="spark-configuration" class="anchor" href="#spark-configuration" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Spark Configuration</h2>
-
 <p><b></b></p><p align="center">Spark Configuration Reference in 
spark-defaults.conf</p>
-
 <table>
 <thead>
 <tr>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/data-management.html
----------------------------------------------------------------------
diff --git a/content/data-management.html b/content/data-management.html
index 63b9662..0ae1ef8 100644
--- a/content/data-management.html
+++ b/content/data-management.html
@@ -156,41 +156,33 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="data-management" class="anchor" href="#data-management" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Data Management</h1>
-
 <p>This tutorial introduces you to the conceptual details of data management, including:</p>
-
 <ul>
 <li><a href="#loading-data">Loading Data</a></li>
 <li><a href="#deleting-data">Deleting Data</a></li>
 <li><a href="#compacting-data">Compacting Data</a></li>
 <li><a href="#updating-data">Updating Data</a></li>
 </ul>
-
 <h2>
 <a id="loading-data" class="anchor" href="#loading-data" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Loading Data</h2>
-
 <ul>
 <li>
 <p><strong>Scenario</strong></p>
-
 <p>After creating a table, you can load data to the table using the <a 
href="dml-operation-on-carbondata.html">LOAD DATA</a> command. The loaded data 
is available for querying.
-When data load is triggered, the data is encoded in CarbonData format and 
copied into HDFS CarbonData store path (specified in carbon.properties file) 
+When data load is triggered, the data is encoded in CarbonData format and 
copied into HDFS CarbonData store path (specified in carbon.properties file)
 in compressed, multi dimensional columnar format for quick analysis queries. 
The same command can be used to load new data or to
-update the existing data. Only one data load can be triggered for one table. 
The high cardinality columns of the dictionary encoding are 
+update the existing data. Only one data load can be triggered for one table. 
The high cardinality columns of the dictionary encoding are
 automatically recognized and these columns will not be used for dictionary 
encoding.</p>
 </li>
 <li>
 <p><strong>Procedure</strong></p>
-
 <p>Data loading is a process that involves execution of multiple steps to 
read, sort and encode the data in CarbonData store format.
-Each step is executed on different threads. After data loading process is 
complete, the status (success/partial success) is updated to 
+Each step is executed on different threads. After data loading process is 
complete, the status (success/partial success) is updated to
 CarbonData store metadata. The table below lists the possible load status.</p>
 </li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -209,49 +201,36 @@ CarbonData store metadata. The table below lists the 
possible load status.</p>
 </tr>
 </tbody>
 </table>
-
 <p>In case of failure, the error is logged in the error log. Details of loads can be seen with the <a href="dml-operation-on-carbondata.html">SHOW SEGMENTS</a> command. The SHOW SEGMENTS command output consists of :</p>
-
 <ul>
 <li>SegmentSequenceID</li>
 <li>START_TIME OF LOAD</li>
-<li>END_TIME OF LOAD </li>
-<li>
-<p>LOAD STATUS</p>
-
+<li>END_TIME OF LOAD</li>
+<li>LOAD STATUS</li>
+</ul>
 <p>The latest load will be displayed first in the output.</p>
-
 <p>Refer to <a href="dml-operation-on-carbondata.html">DML operations on 
CarbonData</a> for load commands.</p>
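 <p>As a quick illustration, a minimal load sketch for a hypothetical table named sales (the full option list is on the DML page) :</p>
 <pre><code>LOAD DATA LOCAL INPATH '/opt/rawdata/data.csv'
 INTO TABLE sales
 OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"')
 </code></pre>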
-</li>
-</ul>
-
 <h2>
 <a id="deleting-data" class="anchor" href="#deleting-data" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Deleting Data</h2>
-
 <ul>
 <li>
 <p><strong>Scenario</strong></p>
-
-<p>If you have loaded wrong data into the table, or too many bad records are 
present and you want to modify and reload the data, you can delete required 
data loads. 
+<p>If you have loaded wrong data into the table, or too many bad records are 
present and you want to modify and reload the data, you can delete required 
data loads.
The load can be deleted using the Segment Sequence ID or, if the table contains a date field, the data can be deleted using the date field.
If there are specific records that need to be deleted based on some filter condition(s), we can delete by records.</p>
 </li>
 <li>
-<p><strong>Procedure</strong> </p>
-
+<p><strong>Procedure</strong></p>
 <p>The loaded data can be deleted in the following ways:</p>
-
 <ul>
 <li>
 <p>Delete by Segment ID</p>
-
 <p>After you get the segment ID of the segment that you want to delete, execute the delete command for the selected segment (a sketch follows the table below).
The status of the deleted segment is updated to Marked for delete / Marked for Update.</p>
 </li>
 </ul>
 </li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -282,77 +261,65 @@ The status of deleted segment is updated to Marked for 
delete / Marked for Updat
 </tr>
 </tbody>
 </table>
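 <p>A sketch of deleting by segment ID for a hypothetical table named sales, using the commands documented on the DML page :</p>
 <pre><code>SHOW SEGMENTS FOR TABLE sales LIMIT 4;
 DELETE SEGMENT 0 FROM TABLE sales;
 </code></pre>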
-
 <ul>
 <li>
 <p>Delete by Date Field</p>
-
 <p>If the table contains date field, you can delete the data based on a 
specific date.</p>
 </li>
 <li>
 <p>Delete by Record</p>
-
 <p>To delete records from a CarbonData table based on some filter condition(s) (see the sketch after the CLEAN FILES example below).</p>
-
 <p>For delete commands refer to <a href="dml-operation-on-carbondata.html">DML 
operations on CarbonData</a>.</p>
 </li>
 <li>
 <p><strong>NOTE</strong>:</p>
-
 <ul>
-<li><p>When the delete segment DML is called, segment will not be deleted 
physically from the file system. Instead the segment status will be marked as 
"Marked for Delete". For the query execution, this deleted segment will be 
excluded.</p></li>
-<li><p>The deleted segment will be deleted physically during the next load 
operation and only after the maximum query execution time configured using 
"max.query.execution.time". By default it is 60 minutes.</p></li>
-<li><p>If the user wants to force delete the segment physically then he can 
use CLEAN FILES Command.</p></li>
+<li>
+<p>When the delete segment DML is called, segment will not be deleted 
physically from the file system. Instead the segment status will be marked as 
"Marked for Delete". For the query execution, this deleted segment will be 
excluded.</p>
+</li>
+<li>
+<p>The deleted segment will be deleted physically during the next load 
operation and only after the maximum query execution time configured using 
"max.query.execution.time". By default it is 60 minutes.</p>
+</li>
+<li>
+<p>If the user wants to force delete the segment physically then he can use 
CLEAN FILES Command.</p>
+</li>
 </ul>
 </li>
 </ul>
-
 <p>Example :</p>
-
 <pre><code>CLEAN FILES FOR TABLE table1
 </code></pre>
-
 <p>This DML will immediately physically delete the segments which are "Marked for delete".</p>
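 <p>Likewise, sketches of delete by date field and delete by record for a hypothetical table, using the command forms documented on the DML page :</p>
 <pre><code>DELETE SEGMENTS FROM TABLE sales
 WHERE STARTTIME BEFORE '2017-06-01 12:05:06';

 DELETE FROM sales WHERE country = 'china';
 </code></pre>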
-
 <h2>
 <a id="compacting-data" class="anchor" href="#compacting-data" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Compacting Data</h2>
-
 <ul>
 <li>
 <p><strong>Scenario</strong></p>
-
-<p>Frequent data ingestion results in several fragmented CarbonData files in 
the store directory. Since data is sorted only within each load, the indices 
perform only within each 
-load. This means that there will be one index for each load and as number of 
data load increases, the number of indices also increases. As each index works 
only on one load, 
-the performance of indices is reduced. CarbonData provides provision for 
compacting the loads. Compaction process combines several segments into one 
large segment by merge sorting the data from across the segments.  </p>
+<p>Frequent data ingestion results in several fragmented CarbonData files in the store directory. Since data is sorted only within each load, the indices perform only within each
load. This means that there will be one index for each load, and as the number of data loads increases, the number of indices also increases. As each index works only on one load,
the performance of the indices is reduced. CarbonData provides a provision for compacting the loads. The compaction process combines several segments into one large segment by merge sorting the data from across the segments.</p>
 </li>
 <li>
 <p><strong>Procedure</strong></p>
-
 <p>There are two types of compaction: Minor and Major compaction.</p>
-
 <ul>
 <li>
 <p><strong>Minor Compaction</strong></p>
-
-<p>In minor compaction the user can specify how many loads to be merged. Minor 
compaction triggers for every data load if the parameter 
carbon.enable.auto.load.merge is set. If any segments are available to be 
merged, then compaction will 
+<p>In minor compaction, the user can specify how many loads are to be merged. Minor compaction triggers for every data load if the parameter carbon.enable.auto.load.merge is set. If any segments are available to be merged, then compaction will
run in parallel with the data load. There are 2 levels in minor compaction.</p>
-
 <ul>
 <li>Level 1: Merging of the segments which are not yet compacted.</li>
-<li>Level 2: Merging of the compacted segments again to form a bigger segment. 
</li>
+<li>Level 2: Merging of the compacted segments again to form a bigger 
segment.</li>
 </ul>
 </li>
 <li>
 <p><strong>Major Compaction</strong></p>
-
-<p>In Major compaction, many segments can be merged into one big segment. User 
will specify the compaction size until which segments can be merged. Major 
compaction is usually done during the off-peak time. </p>
+<p>In Major compaction, many segments can be merged into one big segment. The user specifies the compaction size up to which segments can be merged. Major compaction is usually done during off-peak time.</p>
 </li>
 </ul>
-
-<p>There are number of parameters related to Compaction that can be set in 
carbon.properties file </p>
+<p>There are a number of parameters related to compaction that can be set in the carbon.properties file.</p>
 </li>
 </ul>
-
 <table>
 <thead>
 <tr>
@@ -401,24 +368,18 @@ run parallel with data load. There are 2 levels in minor 
compaction.</p>
 </tr>
 </tbody>
 </table>
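 <p>A carbon.properties sketch for compaction settings; carbon.enable.auto.load.merge is named above, while the remaining property names and values are assumptions to check against the table :</p>
 <pre><code>carbon.enable.auto.load.merge=true
 carbon.compaction.level.threshold=4,3
 carbon.major.compaction.size=1024
 </code></pre>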
-
 <p>For compaction commands refer to <a 
href="ddl-operation-on-carbondata.html">DDL operations on CarbonData</a></p>
-
 <h2>
 <a id="updating-data" class="anchor" href="#updating-data" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Updating Data</h2>
-
 <ul>
 <li>
 <p><strong>Scenario</strong></p>
-
 <p>Sometimes, after data has been ingested into the system, it needs to be updated. There may also be situations where specific columns need to be updated
on the basis of a column expression and optional filter conditions.</p>
 </li>
 <li>
 <p><strong>Procedure</strong></p>
-
 <p>To update, we need to specify the column expression with optional filter condition(s), as sketched below.</p>
-
 <p>For update commands refer to <a href="dml-operation-on-carbondata.html">DML 
operations on CarbonData</a>.</p>
 </li>
 </ul>
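 <p>A minimal update sketch for a hypothetical table and columns, following the UPDATE syntax on the DML page :</p>
 <pre><code>UPDATE sales
 SET (unit_price) = (unit_price * 1.1)
 WHERE country = 'india';
 </code></pre>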

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/ddl-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/content/ddl-operation-on-carbondata.html 
b/content/ddl-operation-on-carbondata.html
index 24346a0..a12d15c 100644
--- a/content/ddl-operation-on-carbondata.html
+++ b/content/ddl-operation-on-carbondata.html
@@ -156,17 +156,12 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="ddl-operations-on-carbondata" class="anchor" 
href="#ddl-operations-on-carbondata" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>DDL Operations on 
CarbonData</h1>
-
 <p>This tutorial guides you through the data definition language support 
provided by CarbonData.</p>
-
 <h2>
 <a id="overview" class="anchor" href="#overview" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Overview</h2>
-
 <p>The following DDL operations are supported in CarbonData :</p>
-
 <ul>
 <li><a href="#create-table">CREATE TABLE</a></li>
 <li><a href="#show-table">SHOW TABLE</a></li>
@@ -175,22 +170,17 @@
 <li><a href="#bucketing">BUCKETING</a></li>
 <li><a href="#table-rename">TABLE RENAME</a></li>
 </ul>
-
 <h2>
 <a id="create-table" class="anchor" href="#create-table" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>CREATE TABLE</h2>
-
 <p>This command can be used to create a CarbonData table by specifying the 
list of fields along with the table properties.</p>
-
 <pre><code>   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
                     [(col_name data_type , ...)]
    STORED BY 'carbondata'
    [TBLPROPERTIES (property_name=property_value, ...)]
    // All Carbon's additional table options will go into properties
 </code></pre>
-
 <h3>
 <a id="parameter-description" class="anchor" href="#parameter-description" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -227,76 +217,59 @@
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="usage-guidelines" class="anchor" href="#usage-guidelines" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Usage Guidelines</h3>
-
 <p>Following are the guidelines for using table properties.</p>
-
 <ul>
 <li>
 <p><strong>Dictionary Encoding Configuration</strong></p>
-
 <p>Dictionary encoding is enabled by default for all String columns, and 
disabled for non-String columns. You can include and exclude columns for 
dictionary encoding.</p>
 </li>
 </ul>
-
 <pre><code>       TBLPROPERTIES ('DICTIONARY_EXCLUDE'='column1, column2')
        TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
 </code></pre>
-
 <p>Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is 
applicable for high-cardinality columns and is an optional parameter. 
DICTIONARY_INCLUDE will generate dictionary for the columns specified in the 
list.</p>
-
 <ul>
 <li>
 <p><strong>Row/Column Format Configuration</strong></p>
-
 <p>Column groups with more than one column are stored in row format, instead 
of columnar format. By default, each column is a separate column group.</p>
 </li>
 </ul>
-
 <pre><code>TBLPROPERTIES ('COLUMN_GROUPS'='(column1, column3),
 (Column4,Column5,Column6)')
 </code></pre>
-
 <ul>
 <li>
 <p><strong>Table Block Size Configuration</strong></p>
-
 <p>The block size of table files can be defined using the property 
TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB 
and supports a range of 1 MB to 2048 MB.
- If you do not specify this value in the DDL command, default value is 
used.</p>
+If you do not specify this value in the DDL command, default value is used.</p>
 </li>
 </ul>
-
 <pre><code>       TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
 </code></pre>
-
 <p>Here 512 MB means the block size of this table is 512 MB, you can also set 
it as 512M or 512.</p>
-
 <ul>
 <li>
 <p><strong>Inverted Index Configuration</strong></p>
-
 <p>An inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns that are in reward position.
-  By default inverted index is enabled. The user can disable the inverted 
index creation for some columns.</p>
+By default inverted index is enabled. The user can disable the inverted index 
creation for some columns.</p>
 </li>
 </ul>
-
 <pre><code>       TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
 </code></pre>
-
 <p>No inverted index shall be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable to high-cardinality columns and is an optional parameter.</p>
-
 <p>NOTE:</p>
-
 <ul>
-<li><p>By default all columns other than numeric datatype are treated as 
dimensions and all columns of numeric datatype are treated as measures.</p></li>
-<li><p>All dimensions except complex datatype columns are part of multi 
dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. 
If the user wants to keep any column (except columns of complex datatype) in 
multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE 
or DICTIONARY_INCLUDE.</p></li>
+<li>
+<p>By default all columns other than numeric datatype are treated as 
dimensions and all columns of numeric datatype are treated as measures.</p>
+</li>
+<li>
+<p>All dimensions except complex datatype columns are part of multi 
dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. 
If the user wants to keep any column (except columns of complex datatype) in 
multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE 
or DICTIONARY_INCLUDE.</p>
+</li>
 </ul>
-
 <h3>
 <a id="example" class="anchor" href="#example" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
-
 <pre><code>    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                                    productNumber Int,
                                    productName String,
@@ -312,18 +285,13 @@
                      'DICTIONARY_INCLUDE'='productNumber',
                      'NO_INVERTED_INDEX'='productBatch')
 </code></pre>
-
 <h2>
 <a id="show-table" class="anchor" href="#show-table" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>SHOW TABLE</h2>
-
 <p>This command can be used to list all the tables in the current database or all the tables of a specific database.</p>
-
 <pre><code>  SHOW TABLES [IN db_Name];
 </code></pre>
-
 <h3>
 <a id="parameter-description-1" class="anchor" href="#parameter-description-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -340,24 +308,17 @@
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="example-1" class="anchor" href="#example-1" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
-
 <pre><code>  SHOW TABLES IN ProductSchema;
 </code></pre>
-
 <h2>
 <a id="drop-table" class="anchor" href="#drop-table" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>DROP TABLE</h2>
-
 <p>This command is used to delete an existing table.</p>
-
 <pre><code>  DROP TABLE [IF EXISTS] [db_name.]table_name;
 </code></pre>
-
 <h3>
 <a id="parameter-description-2" class="anchor" href="#parameter-description-2" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -379,26 +340,18 @@
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="example-2" class="anchor" href="#example-2" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
-
 <pre><code>  DROP TABLE IF EXISTS productSchema.productSalesTable;
 </code></pre>
-
 <h2>
 <a id="compaction" class="anchor" href="#compaction" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>COMPACTION</h2>
-
 <p>This command merges the specified number of segments into one segment. This 
enhances the query performance of the table.</p>
-
 <pre><code>  ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR';
 </code></pre>
-
 <p>To get details about Compaction refer to <a 
href="data-management.html">Data Management</a></p>
-
 <h3>
 <a id="parameter-description-3" class="anchor" href="#parameter-description-3" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -420,42 +373,32 @@
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="syntax" class="anchor" href="#syntax" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Syntax</h3>
-
 <ul>
 <li><strong>Minor Compaction</strong></li>
 </ul>
-
 <pre><code>ALTER TABLE table_name COMPACT 'MINOR';
 </code></pre>
-
 <ul>
 <li><strong>Major Compaction</strong></li>
 </ul>
-
 <pre><code>ALTER TABLE table_name COMPACT 'MAJOR';
 </code></pre>
-
 <h2>
 <a id="bucketing" class="anchor" href="#bucketing" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>BUCKETING</h2>
-
 <p>The bucketing feature can be used to distribute/organize the table/partition data into multiple files such
that similar records are present in the same file. While creating a table, the user needs to specify the
columns to be used for bucketing and the number of buckets. For the selection of a bucket, the hash value
of the columns is used.</p>
-
 <pre><code>   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
                     [(col_name data_type, ...)]
    STORED BY 'carbondata'
    TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets',
    'BUCKETCOLUMNS'='columnname')
 </code></pre>
-
 <h2>
 <a id="parameter-description-4" class="anchor" href="#parameter-description-4" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h2>
-
 <table>
 <thead>
 <tr>
@@ -477,19 +420,21 @@ of columns is used.</p>
 </tr>
 </tbody>
 </table>
-
 <h2>
 <a id="usage-guidelines-1" class="anchor" href="#usage-guidelines-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Usage Guidelines</h2>
-
 <ul>
-<li><p>The feature is supported for Spark 1.6.2 onwards, but the performance 
optimization is evident from Spark 2.1 onwards.</p></li>
-<li><p>Bucketing can not be performed for columns of Complex Data 
Types.</p></li>
-<li><p>Columns in the BUCKETCOLUMN parameter must be only dimension. The 
BUCKETCOLUMN parameter can not be a measure or a combination of measures and 
dimensions.</p></li>
+<li>
+<p>The feature is supported for Spark 1.6.2 onwards, but the performance 
optimization is evident from Spark 2.1 onwards.</p>
+</li>
+<li>
+<p>Bucketing cannot be performed on columns of Complex Data Types.</p>
+</li>
+<li>
+<p>Columns in the BUCKETCOLUMN parameter must be dimensions only. The BUCKETCOLUMN parameter cannot be a measure or a combination of measures and dimensions.</p>
+</li>
 </ul>
-
 <h2>
 <a id="example-" class="anchor" href="#example-" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example :</h2>
-
 <pre><code> CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                                 productNumber Int,
                                 productName String,
@@ -507,21 +452,15 @@ of columns is used.</p>
                   'BUCKETNUMBER'='4',
                   'BUCKETCOLUMNS'='productName')
 </code></pre>
-
 <h2>
 <a id="table-rename" class="anchor" href="#table-rename" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>TABLE RENAME</h2>
-
 <p>This command is used to rename an existing table.</p>
-
 <h3>
 <a id="syntax-1" class="anchor" href="#syntax-1" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Syntax</h3>
-
 <pre><code>   ALTER TABLE [db_name.]table_name RENAME TO new_table_name;
 </code></pre>
-
 <h3>
 <a id="parameter-description-5" class="anchor" href="#parameter-description-5" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -544,20 +483,15 @@ of columns is used.</p>
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="usage-guidelines-2" class="anchor" href="#usage-guidelines-2" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Usage Guidelines</h3>
-
 <p>The following conditions must be met for a successful rename operation:</p>
-
 <ul>
 <li>Queries running in parallel that require forming the path from the table name to read carbon store files might fail during this operation.</li>
 <li>Secondary index table rename is not permitted.</li>
 </ul>
-
 <h3>
 <a id="example-3" class="anchor" href="#example-3" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
-
 <pre><code>    ALTER TABLE carbon RENAME TO carbondata;
 
    ALTER TABLE test_db.carbon RENAME TO test_db.carbondata;

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/dml-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/content/dml-operation-on-carbondata.html 
b/content/dml-operation-on-carbondata.html
index c05e461..52a70db 100644
--- a/content/dml-operation-on-carbondata.html
+++ b/content/dml-operation-on-carbondata.html
@@ -156,17 +156,12 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="dml-operations-on-carbondata" class="anchor" 
href="#dml-operations-on-carbondata" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>DML Operations on 
CarbonData</h1>
-
 <p>This tutorial guides you through the data manipulation language support 
provided by CarbonData.</p>
-
 <h2>
 <a id="overview" class="anchor" href="#overview" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Overview</h2>
-
 <p>The following DML operations are supported in CarbonData :</p>
-
 <ul>
 <li><a href="#load-data">LOAD DATA</a></li>
 <li><a href="#insert-data-into-a-carbondata-table">INSERT DATA INTO A 
CARBONDATA TABLE</a></li>
@@ -176,28 +171,20 @@
 <li><a href="#update-carbondata-table">UPDATE CARBONDATA TABLE</a></li>
 <li><a href="#delete-records-from-carbondata-table">DELETE RECORDS FROM 
CARBONDATA TABLE</a></li>
 </ul>
-
 <h2>
 <a id="load-data" class="anchor" href="#load-data" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>LOAD DATA</h2>
-
 <p>This command loads user data in raw format into the CarbonData-specific data format store; this allows CarbonData to provide good performance while querying the data.
Please visit <a href="data-management.html">Data Management</a> for more details on LOAD.</p>
-
 <h3>
 <a id="syntax" class="anchor" href="#syntax" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Syntax</h3>
-
 <pre><code>LOAD DATA [LOCAL] INPATH 'folder_path' 
 INTO TABLE [db_name.]table_name 
 OPTIONS(property_name=property_value, ...)
 </code></pre>
-
 <p>OPTIONS are not mandatory for the data loading process. Inside OPTIONS, the user can provide any of the options such as DELIMITER, QUOTECHAR, ESCAPECHAR and MULTILINE as per requirement.</p>
-
 <p>NOTE: The path must be a canonical path.</p>
-
 <h3>
 <a id="parameter-description" class="anchor" href="#parameter-description" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -229,106 +216,86 @@ OPTIONS(property_name=property_value, ...)
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="usage-guidelines" class="anchor" href="#usage-guidelines" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Usage Guidelines</h3>
-
 <p>You can use the following options to load data:</p>
-
 <ul>
 <li>
 <p><strong>DELIMITER:</strong> Delimiters can be provided in the load 
command.</p>
-
 <pre><code>OPTIONS('DELIMITER'=',')
 </code></pre>
 </li>
 <li>
 <p><strong>QUOTECHAR:</strong> Quote Characters can be provided in the load 
command.</p>
-
 <pre><code>OPTIONS('QUOTECHAR'='"')
 </code></pre>
 </li>
 <li>
 <p><strong>COMMENTCHAR:</strong> Comment characters can be provided in the load command if the user wants to comment out lines.</p>
-
 <pre><code>OPTIONS('COMMENTCHAR'='#')
 </code></pre>
 </li>
 <li>
 <p><strong>FILEHEADER:</strong> Headers can be provided in the LOAD DATA 
command if headers are missing in the source files.</p>
-
 <pre><code>OPTIONS('FILEHEADER'='column1,column2') 
 </code></pre>
 </li>
 <li>
 <p><strong>MULTILINE:</strong> CSV with new line character in quotes.</p>
-
 <pre><code>OPTIONS('MULTILINE'='true') 
 </code></pre>
 </li>
 <li>
 <p><strong>ESCAPECHAR:</strong> An escape character can be provided if the user wants strict validation of the escape character in the CSV.</p>
-
 <pre><code>OPTIONS('ESCAPECHAR'='\') 
 </code></pre>
 </li>
 <li>
 <p><strong>COMPLEX_DELIMITER_LEVEL_1:</strong> Split the complex type data 
column in a row (eg., a$b$c --&gt; Array = {a,b,c}).</p>
-
 <pre><code>OPTIONS('COMPLEX_DELIMITER_LEVEL_1'='$') 
 </code></pre>
 </li>
 <li>
 <p><strong>COMPLEX_DELIMITER_LEVEL_2:</strong> Split the complex type nested 
data column in a row. Applies level_1 delimiter &amp; applies level_2 based on 
complex data type (eg., a:b$c:d --&gt; Array&gt; = {{a,b},{c,d}}).</p>
-
 <pre><code>OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':') 
 </code></pre>
 </li>
 <li>
 <p><strong>ALL_DICTIONARY_PATH:</strong> All dictionary files path.</p>
-
 <pre><code>OPTIONS('ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary')
 </code></pre>
 </li>
 <li>
 <p><strong>COLUMNDICT:</strong> Dictionary file path for specified column.</p>
-
 <pre><code>OPTIONS('COLUMNDICT'='column1:dictionaryFilePath1,
 column2:dictionaryFilePath2')
 </code></pre>
-
 <p>NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.</p>
 </li>
 <li>
 <p><strong>DATEFORMAT:</strong> Date format for specified column.</p>
-
 <pre><code>OPTIONS('DATEFORMAT'='column1:dateFormat1, column2:dateFormat2')
 </code></pre>
-
 <p>NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are the same as in Java. Refer to <a href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html" target="_blank">SimpleDateFormat</a>.</p>
 </li>
 <li>
 <p><strong>SINGLE_PASS:</strong> Single Pass Loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in scenarios where subsequent data loading after the initial load involves fewer incremental updates to the dictionary.</p>
-
 <p>This option specifies whether to use single pass for loading data or not. 
By default this option is set to FALSE.</p>
-
 <pre><code>OPTIONS('SINGLE_PASS'='TRUE')
 </code></pre>
-
 <p>Note :</p>
-
 <ul>
-<li><p>If this option is set to TRUE then data loading will take less 
time.</p></li>
+<li>
+<p>If this option is set to TRUE then data loading will take less time.</p>
+</li>
 <li>
 <p>If this option is set to some invalid value other than TRUE or FALSE then 
it uses the default value.</p>
-
-<h3>
-<a id="example" class="anchor" href="#example" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
 </li>
 </ul>
 </li>
 </ul>
-
+<h3>
+<a id="example" class="anchor" href="#example" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
 <pre><code>LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table 
carbontable
 options('DELIMITER'=',', 'QUOTECHAR'='"','COMMENTCHAR'='#',
 'FILEHEADER'='empno,empname,designation,doj,workgroupcategory,
@@ -340,30 +307,21 @@ options('DELIMITER'=',', 
'QUOTECHAR'='"','COMMENTCHAR'='#',
 'SINGLE_PASS'='TRUE'
 )
 </code></pre>
-
 <h2>
 <a id="insert-data-into-a-carbondata-table" class="anchor" 
href="#insert-data-into-a-carbondata-table" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>INSERT DATA INTO A 
CARBONDATA TABLE</h2>
-
 <p>This command inserts data into a CarbonData table. It is defined as a combination of two queries, Insert and Select. It inserts records from a source table into a target CarbonData table. The source table can be a Hive table, a Parquet table or a CarbonData table itself. It also provides the ability to aggregate the records of a table by performing a Select query on the source table and loading the resultant records into a CarbonData table.</p>
-
 <p><strong>NOTE</strong> : The client node where the INSERT command is executed must be part of the cluster.</p>
-
 <h3>
 <a id="syntax-1" class="anchor" href="#syntax-1" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Syntax</h3>
-
 <pre><code>INSERT INTO TABLE &lt;CARBONDATA TABLE&gt; SELECT * FROM 
sourceTableName 
 [ WHERE { &lt;filter_condition&gt; } ];
 </code></pre>
-
 <p>You can also omit the <code>table</code> keyword and write your query 
as:</p>
-
 <pre><code>INSERT INTO &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName 
 [ WHERE { &lt;filter_condition&gt; } ];
 </code></pre>
-
 <h3>
 <a id="parameter-description-1" class="anchor" href="#parameter-description-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -382,12 +340,9 @@ options('DELIMITER'=',', 'QUOTECHAR'='"','COMMENTCHAR'='#',
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="usage-guidelines-1" class="anchor" href="#usage-guidelines-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Usage Guidelines</h3>
-
 <p>The following conditions must be met for a successful insert operation :</p>
-
 <ul>
 <li>The source table and the CarbonData table must have the same table 
schema.</li>
 <li>The table must be created.</li>
@@ -396,46 +351,32 @@ options('DELIMITER'=',', 
'QUOTECHAR'='"','COMMENTCHAR'='#',
 <li>The INSERT INTO command does not support partial success; if bad records are found, it will fail.</li>
 <li>Data cannot be loaded or updated in source table while insert from source 
table to target table is in progress.</li>
 </ul>
-
 <p>To enable data load or update during insert operation, configure the 
following property to true.</p>
-
 <pre><code>carbon.insert.persist.enable=true
 </code></pre>
-
 <p>By default the above configuration will be false.</p>
-
 <p><strong>NOTE</strong>: Enabling this property will reduce the 
performance.</p>
-
 <h3>
 <a id="examples" class="anchor" href="#examples" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Examples</h3>
-
 <pre><code>INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM 
 table2 group by item1;
 </code></pre>
-
 <pre><code>INSERT INTO table1 SELECT item1, item2, item3 FROM table2 
 where item2='xyz';
 </code></pre>
-
 <pre><code>INSERT INTO table1 SELECT * FROM table2 
 where exists (select * from table3 
 where table2.item1 = table3.item1);
 </code></pre>
-
 <p><strong>The Status Success/Failure shall be captured in the driver 
log.</strong></p>
-
 <h2>
 <a id="show-segments" class="anchor" href="#show-segments" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>SHOW SEGMENTS</h2>
-
 <p>This command is used to get the segments of a CarbonData table.</p>
-
 <pre><code>SHOW SEGMENTS FOR TABLE [db_name.]table_name 
 LIMIT number_of_segments;
 </code></pre>
-
 <h3>
 <a id="parameter-description-2" class="anchor" href="#parameter-description-2" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -462,33 +403,23 @@ LIMIT number_of_segments;
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="example-1" class="anchor" href="#example-1" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
-
 <pre><code>SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4;
 </code></pre>
-
 <h2>
 <a id="delete-segment-by-id" class="anchor" href="#delete-segment-by-id" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>DELETE SEGMENT BY ID</h2>
-
-<p>This command is used to delete segment by using the segment ID. Each 
segment has a unique segment ID associated with it. 
+<p>This command is used to delete a segment by using the segment ID. Each segment has a unique segment ID associated with it.
Using this segment ID, you can remove the segment.</p>
-
 <p>The following command will get the segment ID.</p>
-
 <pre><code>SHOW SEGMENTS FOR Table dbname.tablename LIMIT number_of_segments
 </code></pre>
-
 <p>After you retrieve the segment ID of the segment that you want to delete, 
execute the following command to delete the selected segment.</p>
-
 <pre><code>DELETE SEGMENT segment_sequence_id1, segments_sequence_id2, .... 
 FROM TABLE tableName
 </code></pre>
-
 <h3>
 <a id="parameter-description-3" class="anchor" href="#parameter-description-3" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -515,29 +446,21 @@ FROM TABLE tableName
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="example-2" class="anchor" href="#example-2" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
-
 <pre><code>DELETE SEGMENT 0 FROM TABLE CarbonDatabase.CarbonTable;
 DELETE SEGMENT 0.1,5,8 FROM TABLE CarbonDatabase.CarbonTable;
 </code></pre>
-
-<p>NOTE: Here 0.1 is compacted segment sequence id. </p>
-
+<p>NOTE: Here 0.1 is the compacted segment sequence ID.</p>
 <h2>
 <a id="delete-segment-by-date" class="anchor" href="#delete-segment-by-date" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>DELETE SEGMENT BY DATE</h2>
-
-<p>This command will allow to delete the CarbonData segment(s) from the store 
based on the date provided by the user in the DML command. 
+<p>This command allows deleting CarbonData segment(s) from the store based on the date provided by the user in the DML command.
The segments created before the particular date will be removed from the specific stores.</p>
-
-<pre><code>DELETE FROM TABLE [schema_name.]table_name 
-WHERE[DATE_FIELD]BEFORE [DATE_VALUE]
+<pre><code>DELETE SEGMENTS FROM TABLE [db_name.]table_name 
+WHERE STARTTIME BEFORE DATE_VALUE
 </code></pre>
-
 <h3>
 <a id="parameter-description-4" class="anchor" href="#parameter-description-4" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -564,40 +487,30 @@ WHERE[DATE_FIELD]BEFORE [DATE_VALUE]
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="example-3" class="anchor" href="#example-3" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
-
 <pre><code> DELETE SEGMENTS FROM TABLE CarbonDatabase.CarbonTable 
  WHERE STARTTIME BEFORE '2017-06-01 12:05:06';  
 </code></pre>
-
 <h2>
 <a id="update-carbondata-table" class="anchor" href="#update-carbondata-table" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Update CarbonData Table</h2>
-
 <p>This command allows updating the carbon table based on a column expression and optional filter conditions.</p>
-
 <h3>
 <a id="syntax-2" class="anchor" href="#syntax-2" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Syntax</h3>
-
 <pre><code> UPDATE &lt;table_name&gt;
  SET (column_name1, column_name2, ... column_name n) =
  (column1_expression , column2_expression . .. column n_expression )
  [ WHERE { &lt;filter_condition&gt; } ];
 </code></pre>
-
 <p>Alternatively, the following command can also be used to update the CarbonData table :</p>
-
 <pre><code>UPDATE &lt;table_name&gt;
 SET (column_name1, column_name2,) =
 (select sourceColumn1, sourceColumn2 from sourceTable
 [ WHERE { &lt;filter_condition&gt; } ] )
 [ WHERE { &lt;filter_condition&gt; } ];
 </code></pre>
-
 <h3>
 <a id="parameter-description-5" class="anchor" href="#parameter-description-5" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -624,12 +537,9 @@ SET (column_name1, column_name2,) =
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="usage-guidelines-2" class="anchor" href="#usage-guidelines-2" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Usage Guidelines</h3>
-
 <p>The following conditions must be met for a successful update :</p>
-
 <ul>
 <li>The update command fails if multiple input rows in the source table are matched with a single row in the destination table.</li>
 <li>If the source table generates empty records, the update operation will 
complete successfully without updating the table.</li>
@@ -637,59 +547,43 @@ SET (column_name1, column_name2,) =
 <li>In sub-query, if the source table and the target table are same, then the 
update operation fails.</li>
 <li>If the sub-query used in UPDATE statement contains aggregate method or 
group by query, then the UPDATE operation fails.</li>
 </ul>
-
 <h3>
 <a id="examples-1" class="anchor" href="#examples-1" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Examples</h3>
-
 <p>Update is not supported for queries that contain aggregate or group by.</p>
-
 <pre><code> UPDATE t_carbn01 a
  SET (a.item_type_code, a.profit) = ( SELECT b.item_type_cd,
  sum(b.profit) from t_carbn01b b
  WHERE item_type_cd =2 group by item_type_code);
 </code></pre>
-
 <p>Here the Update Operation fails as the query contains aggregate function 
sum(b.profit) and group by clause in the sub-query.</p>
-
 <pre><code>UPDATE carbonTable1 d
 SET(d.column3,d.column5 ) = (SELECT s.c33 ,s.c55
 FROM sourceTable1 s WHERE d.column1 = s.c11)
 WHERE d.column1 = 'china' EXISTS( SELECT * from table3 o where o.c2 &gt; 1);
 </code></pre>
-
 <pre><code>UPDATE carbonTable1 d SET (c3) = (SELECT s.c33 from sourceTable1 s
 WHERE d.column1 = s.c11)
 WHERE exists( select * from iud.other o where o.c2 &gt; 1);
 </code></pre>
-
 <pre><code>UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5 , "y" ));
 </code></pre>
-
 <pre><code>UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, "xyx")
 WHERE d.column1 = 'india';
 </code></pre>
-
 <pre><code>UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, "xyx")
 WHERE d.column1 = 'india'
 and EXISTS( SELECT * FROM table3 o WHERE o.column2 &gt; 1);
 </code></pre>
-
 <p><strong>The Status Success/Failure shall be captured in the driver log and 
the client.</strong></p>
-
 <h2>
 <a id="delete-records-from-carbondata-table" class="anchor" 
href="#delete-records-from-carbondata-table" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Delete Records from 
CarbonData Table</h2>
-
 <p>This command allows us to delete records from a CarbonData table.</p>
-
 <h3>
 <a id="syntax-3" class="anchor" href="#syntax-3" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Syntax</h3>
-
 <pre><code>DELETE FROM table_name [WHERE expression];
 </code></pre>
-
 <h3>
 <a id="parameter-description-6" class="anchor" href="#parameter-description-6" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Parameter Description</h3>
-
 <table>
 <thead>
 <tr>
@@ -704,28 +598,21 @@ and EXISTS( SELECT * FROM table3 o WHERE o.column2 &gt; 
1);
 </tr>
 </tbody>
 </table>
-
 <h3>
 <a id="examples-2" class="anchor" href="#examples-2" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Examples</h3>
-
 <pre><code>DELETE FROM columncarbonTable1 d WHERE d.column1  = 'china';
 </code></pre>
-
 <pre><code>DELETE FROM dest WHERE column1 IN ('china', 'USA');
 </code></pre>
-
 <pre><code>DELETE FROM columncarbonTable1
 WHERE column1 IN (SELECT column11 FROM sourceTable2);
 </code></pre>
-
 <pre><code>DELETE FROM columncarbonTable1
 WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE
 column1 = 'USA');
 </code></pre>
-
 <pre><code>DELETE FROM columncarbonTable1 WHERE column2 &gt;= 4
 </code></pre>
-
 <p><strong>The Status Success/Failure shall be captured in the driver log and 
the client.</strong></p>
 </div>
 </div>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/faq.html
----------------------------------------------------------------------
diff --git a/content/faq.html b/content/faq.html
index fe58fe6..8567346 100644
--- a/content/faq.html
+++ b/content/faq.html
@@ -156,10 +156,8 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="faqs" class="anchor" href="#faqs" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>FAQs</h1>
-
 <ul>
 <li><a href="#what-are-bad-records">What are Bad Records?</a></li>
 <li><a href="#where-are-bad-records-stored-in-carbondata">Where are Bad 
Records Stored in CarbonData?</a></li>
@@ -169,76 +167,52 @@
 <li><a href="#what-is-carbon-lock-type">What is Carbon Lock Type?</a></li>
 <li><a href="#how-to-resolve-abstract-method-error">How to resolve Abstract 
Method Error?</a></li>
 </ul>
-
 <h2>
 <a id="what-are-bad-records" class="anchor" href="#what-are-bad-records" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>What are Bad Records?</h2>
-
 <p>Records that fail to get loaded into CarbonData due to data type incompatibility, or that are empty or have an incompatible format, are classified as Bad Records.</p>
-
 <h2>
 <a id="where-are-bad-records-stored-in-carbondata" class="anchor" 
href="#where-are-bad-records-stored-in-carbondata" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Where are Bad 
Records Stored in CarbonData?</h2>
-
 <p>The bad records are stored at the location set in 
carbon.badRecords.location in carbon.properties file.
 By default <strong>carbon.badRecords.location</strong> specifies the following 
location <code>/opt/Carbon/Spark/badrecords</code>.</p>
-
 <h2>
 <a id="how-to-enable-bad-record-logging" class="anchor" 
href="#how-to-enable-bad-record-logging" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>How to enable Bad 
Record Logging?</h2>
-
 <p>While loading data, we can specify the approach to handle Bad Records. In order to analyse the cause of the Bad Records, the parameter <code>BAD_RECORDS_LOGGER_ENABLE</code> must be set to the value <code>TRUE</code>. There are multiple approaches to handle Bad Records, which can be specified by the parameter <code>BAD_RECORDS_ACTION</code>.</p>
-
 <ul>
 <li>To pad the incorrect values of the csv rows with NULL value and load the 
data in CarbonData, set the following in the query :</li>
 </ul>
-
 <pre><code>'BAD_RECORDS_ACTION'='FORCE'
 </code></pre>
-
 <ul>
 <li>To write the Bad Records without padding incorrect values with NULL in the 
raw csv (set in the parameter <strong>carbon.badRecords.location</strong>), set 
the following in the query :</li>
 </ul>
-
 <pre><code>'BAD_RECORDS_ACTION'='REDIRECT'
 </code></pre>
-
 <h2>
 <a id="how-to-ignore-the-bad-records" class="anchor" 
href="#how-to-ignore-the-bad-records" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>How to ignore the 
Bad Records?</h2>
-
 <p>To ignore the Bad Records from getting stored in the raw csv, we need to 
set the following in the query :</p>
-
 <pre><code>'BAD_RECORDS_ACTION'='IGNORE'
 </code></pre>
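 <p>Putting these together, a sketch of a load command with bad record logging enabled (the table name and path are hypothetical) :</p>
 <pre><code>LOAD DATA LOCAL INPATH '/opt/rawdata/data.csv' INTO TABLE sales
 OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='TRUE',
 'BAD_RECORDS_ACTION'='REDIRECT')
 </code></pre>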
-
 <h2>
 <a id="how-to-specify-store-location-while-creating-carbon-session" 
class="anchor" 
href="#how-to-specify-store-location-while-creating-carbon-session" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>How to specify store location while creating carbon 
session?</h2>
-
 <p>The store location specified while creating carbon session is used by the 
CarbonData to store the meta data like the schema, dictionary files, dictionary 
meta data and sort indexes.</p>
-
 <p>Try creating <code>carbonsession</code> with <code>storepath</code> 
specified in the following manner :</p>
-
 <pre><code>val carbon = 
SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&lt;store_path&gt;)
 </code></pre>
-
 <p>Example:</p>
-
 <pre><code>val carbon = 
SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://localhost:9000/carbon/store
 ")
 </code></pre>
-
 <h2>
 <a id="what-is-carbon-lock-type" class="anchor" 
href="#what-is-carbon-lock-type" aria-hidden="true"><span aria-hidden="true" 
class="octicon octicon-link"></span></a>What is Carbon Lock Type?</h2>
-
 <p>Apache CarbonData acquires locks on files to prevent concurrent operations from modifying the same files. The lock can be of the following types depending on the storage location; for HDFS we specify it to be of type HDFSLOCK. By default it is set to type LOCALLOCK.
The carbon.lock.type property specifies the type of lock to be acquired during concurrent operations on a table, and can be set with the following values (a usage sketch follows the list) :</p>
-
 <ul>
 <li>
 <strong>LOCALLOCK</strong> : This Lock is created on local file system as 
file. This lock is useful when only one spark driver (thrift server) runs on a 
machine and no other CarbonData spark application is launched concurrently.</li>
 <li>
 <strong>HDFSLOCK</strong> : This lock is created on the HDFS file system as a file. This lock is useful when multiple CarbonData spark applications are launched, no ZooKeeper is running on the cluster, and the HDFS supports file based locking.</li>
 </ul>
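 <p>For example, to use HDFS based locking, set the following in carbon.properties :</p>
 <pre><code>carbon.lock.type=HDFSLOCK
 </code></pre>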
-
 <h2>
 <a id="how-to-resolve-abstract-method-error" class="anchor" 
href="#how-to-resolve-abstract-method-error" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>How to resolve 
Abstract Method Error?</h2>
-
 <p>In order to build the CarbonData project it is necessary to specify the spark profile. The spark profile sets the Spark version. You need to specify the <code>spark version</code> while using Maven to build the project, as sketched below.</p>
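 <p>A hedged sketch of such a Maven build; the profile name spark-2.1 and the version value are assumptions, so check the project's pom.xml for the available profiles :</p>
 <pre><code>mvn clean package -DskipTests -Pspark-2.1 -Dspark.version=2.1.0
 </code></pre>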
 </div>
 </div>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/file-structure-of-carbondata.html
----------------------------------------------------------------------
diff --git a/content/file-structure-of-carbondata.html 
b/content/file-structure-of-carbondata.html
index aa0040c..564486b 100644
--- a/content/file-structure-of-carbondata.html
+++ b/content/file-structure-of-carbondata.html
@@ -156,36 +156,26 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="carbondata-file-structure" class="anchor" 
href="#carbondata-file-structure" aria-hidden="true"><span aria-hidden="true" 
class="octicon octicon-link"></span></a>CarbonData File Structure</h1>
-
 <p>CarbonData files contain groups of data called blocklets, along with all required information like schema, offsets, indices, etc., in a file header and footer, co-located in HDFS.</p>
-
 <p>The file footer can be read once to build the indices in memory, which can 
be utilized for optimizing the scans and processing for all subsequent 
queries.</p>
-
 <h3>
 <a id="understanding-carbondata-file-structure" class="anchor" 
href="#understanding-carbondata-file-structure" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Understanding 
CarbonData File Structure</h3>
-
 <ul>
-<li>Block : It would be as same as HDFS block, CarbonData creates one file for 
each data block, user can specify TABLE_BLOCKSIZE during creation table. Each 
file contains File Header, Blocklets and File Footer. </li>
+<li>Block : It is the same as an HDFS block; CarbonData creates one file for 
each data block, and the user can specify TABLE_BLOCKSIZE during table 
creation (see the sketch below). Each file contains a File Header, Blocklets 
and a File Footer.</li>
 </ul>
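 <p>As a sketch of how the block size can be configured at table creation 
(the table name and columns are hypothetical; the value is in MB):</p>
 <pre><code>scala&gt;carbon.sql("CREATE TABLE IF NOT EXISTS sales (id string, amount Int) 
STORED BY 'carbondata' TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')")
 </code></pre>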
-
 <p><a href="../docs/images/carbon_data_file_structure_new.png?raw=true" 
target="_blank"><img 
src="https://github.com/apache/incubator-carbondata/blob/master/docs/images/carbon_data_file_structure_new.png?raw=true";
 alt="CarbonData File Structure" style="max-width:100%;"></a></p>
-
 <ul>
 <li>File Header : It contains the CarbonData file version number, the list of 
column schemas and the schema update timestamp.</li>
 <li>File Footer : It contains the number of rows, segment info, and all 
blocklets' info and index; you can find the details in the diagram below.</li>
 <li>Blocklet : Rows are grouped to form a blocklet; the size of the blocklet 
is configurable and the default size is 64MB. A Blocklet contains Column Page 
groups for each column.</li>
 <li>Column Page Group : Data of one column, further divided into pages; it is 
guaranteed to be contiguous in the file.</li>
-<li>Page : It has the data of one column and the number of row is fixed to 
32000 size. </li>
+<li>Page : It has the data of one column, and the number of rows is fixed at 
32000.</li>
 </ul>
-
 <p><a href="../docs/images/carbon_data_format_new.png?raw=true" 
target="_blank"><img 
src="https://github.com/apache/incubator-carbondata/blob/master/docs/images/carbon_data_format_new.png?raw=true";
 alt="CarbonData File Format" style="max-width:100%;"></a></p>
-
 <h3>
 <a id="each-page-contains-three-types-of-data" class="anchor" 
href="#each-page-contains-three-types-of-data" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Each page contains 
three types of data</h3>
-
 <ul>
 <li>Data Page: Contains the encoded data of a column.</li>
 <li>Row ID Page (optional): Contains the row ID mappings used when the data 
page is stored as an inverted index.</li>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/installation-guide.html
----------------------------------------------------------------------
diff --git a/content/installation-guide.html b/content/installation-guide.html
index c13946f..717ca62 100644
--- a/content/installation-guide.html
+++ b/content/installation-guide.html
@@ -156,51 +156,53 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="installation-guide" class="anchor" href="#installation-guide" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Installation Guide</h1>
-
 <p>This tutorial guides you through the installation and configuration of 
CarbonData in the following two modes :</p>
-
 <ul>
 <li><a 
href="#installing-and-configuring-carbondata-on-standalone-spark-cluster">Installing
 and Configuring CarbonData on Standalone Spark Cluster</a></li>
 <li><a 
href="#installing-and-configuring-carbondata-on-spark-on-yarn-cluster">Installing
 and Configuring CarbonData on "Spark on YARN" Cluster</a></li>
 </ul>
-
 <p>followed by :</p>
-
 <ul>
 <li><a href="#query-execution-using-carbondata-thrift-server">Query Execution 
using CarbonData Thrift Server</a></li>
 </ul>
-
 <h2>
 <a id="installing-and-configuring-carbondata-on-standalone-spark-cluster" 
class="anchor" 
href="#installing-and-configuring-carbondata-on-standalone-spark-cluster" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Installing and Configuring CarbonData on Standalone 
Spark Cluster</h2>
-
 <h3>
 <a id="prerequisites" class="anchor" href="#prerequisites" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Prerequisites</h3>
-
 <ul>
-<li><p>Hadoop HDFS and Yarn should be installed and running.</p></li>
-<li><p>Spark should be installed and running on all the cluster nodes.</p></li>
-<li><p>CarbonData user should have permission to access HDFS.</p></li>
+<li>
+<p>Hadoop HDFS and YARN should be installed and running.</p>
+</li>
+<li>
+<p>Spark should be installed and running on all the cluster nodes.</p>
+</li>
+<li>
+<p>CarbonData user should have permission to access HDFS.</p>
+</li>
 </ul>
-
 <h3>
 <a id="procedure" class="anchor" href="#procedure" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Procedure</h3>
-
 <ol>
-<li><p><a 
href="https://github.com/apache/incubator-carbondata/blob/master/build/README.md";
 target=_blank>Build the CarbonData</a> project and get the assembly jar from 
<code>./assembly/target/scala-2.1x/carbondata_xxx.jar</code>. </p></li>
+<li>
+<p><a 
href="https://github.com/apache/incubator-carbondata/blob/master/build/README.md";
 target=_blank>Build the CarbonData</a> project and get the assembly jar from 
<code>./assembly/target/scala-2.1x/carbondata_xxx.jar</code>.</p>
+</li>
 <li>
 <p>Copy <code>./assembly/target/scala-2.1x/carbondata_xxx.jar</code> to 
<code>$SPARK_HOME/carbonlib</code> folder.</p>
-
 <p><strong>NOTE</strong>: Create the carbonlib folder if it does not exist 
inside <code>$SPARK_HOME</code> path.</p>
 </li>
-<li><p>Add the carbonlib folder path in the Spark classpath. (Edit 
<code>$SPARK_HOME/conf/spark-env.sh</code> file and modify the value of 
<code>SPARK_CLASSPATH</code> by appending <code>$SPARK_HOME/carbonlib/*</code> 
to the existing value)</p></li>
-<li><p>Copy the <code>./conf/carbon.properties.template</code> file from 
CarbonData repository to <code>$SPARK_HOME/conf/</code> folder and rename the 
file to <code>carbon.properties</code>.</p></li>
-<li><p>Repeat Step 2 to Step 5 in all the nodes of the cluster.</p></li>
+<li>
+<p>Add the carbonlib folder path to the Spark classpath. (Edit the 
<code>$SPARK_HOME/conf/spark-env.sh</code> file and modify the value of 
<code>SPARK_CLASSPATH</code> by appending <code>$SPARK_HOME/carbonlib/*</code> 
to the existing value; see the sketch after this list.)</p>
+</li>
+<li>
+<p>Copy the <code>./conf/carbon.properties.template</code> file from 
CarbonData repository to <code>$SPARK_HOME/conf/</code> folder and rename the 
file to <code>carbon.properties</code>.</p>
+</li>
+<li>
+<p>Repeat Step 2 to Step 5 in all the nodes of the cluster.</p>
+</li>
 <li>
 <p>In the Spark master node, configure the properties mentioned in the 
following table in the <code>$SPARK_HOME/conf/spark-defaults.conf</code> 
file.</p>
-
 <table>
 <thead>
 <tr>
@@ -225,7 +227,6 @@
 </li>
 <li>
 <p>Add the following properties in 
<code>$SPARK_HOME/conf/carbon.properties</code> file:</p>
-
 <table>
 <thead>
 <tr>
@@ -249,46 +250,36 @@
 </li>
 <li>
 <p>Verify the installation. For example:</p>
-
 <pre><code>./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 
2
 --executor-memory 2G
 </code></pre>
 </li>
 </ol>
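 <p>As an illustration of the classpath edit in step 3, the relevant line in 
<code>spark-env.sh</code> might look as follows (a sketch; adjust the path to 
your deployment):</p>
 <pre><code>export SPARK_CLASSPATH=$SPARK_CLASSPATH:$SPARK_HOME/carbonlib/*
 </code></pre>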
-
 <p><strong>NOTE</strong>: Make sure you have permissions for CarbonData JARs 
and files through which driver and executor will start.</p>
-
 <p>To get started with CarbonData : <a href="quick-start-guide.html">Quick 
Start</a>, <a href="ddl-operation-on-carbondata.html">DDL Operations on 
CarbonData</a></p>
-
 <h2>
 <a id="installing-and-configuring-carbondata-on-spark-on-yarn-cluster" 
class="anchor" 
href="#installing-and-configuring-carbondata-on-spark-on-yarn-cluster" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Installing and Configuring CarbonData on "Spark on 
YARN" Cluster</h2>
-
 <p>This section provides the procedure to install CarbonData on "Spark on 
YARN" cluster.</p>
-
 <h3>
 <a id="prerequisites-1" class="anchor" href="#prerequisites-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Prerequisites</h3>
-
 <ul>
 <li>Hadoop HDFS and YARN should be installed and running.</li>
 <li>Spark should be installed and running on all the clients.</li>
 <li>CarbonData user should have permission to access HDFS.</li>
 </ul>
-
 <h3>
 <a id="procedure-1" class="anchor" href="#procedure-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Procedure</h3>
-
 <p>The following steps are only for Driver Nodes. (Driver nodes are the one 
which starts the spark context.)</p>
-
 <ol>
 <li>
 <p><a 
href="https://github.com/apache/incubator-carbondata/blob/master/build/README.md";
 target=_blank>Build the CarbonData</a> project and get the assembly jar from 
<code>./assembly/target/scala-2.1x/carbondata_xxx.jar</code> and copy to 
<code>$SPARK_HOME/carbonlib</code> folder.</p>
-
 <p><strong>NOTE</strong>: Create the carbonlib folder if it does not exist 
inside the <code>$SPARK_HOME</code> path.</p>
 </li>
-<li><p>Copy the <code>./conf/carbon.properties.template</code> file from 
CarbonData repository to <code>$SPARK_HOME/conf/</code> folder and rename the 
file to <code>carbon.properties</code>.</p></li>
+<li>
+<p>Copy the <code>./conf/carbon.properties.template</code> file from 
CarbonData repository to <code>$SPARK_HOME/conf/</code> folder and rename the 
file to <code>carbon.properties</code>.</p>
+</li>
 <li>
 <p>Create a <code>tar.gz</code> file of the carbonlib folder and move it 
inside the carbonlib folder.</p>
-
 <pre><code>cd $SPARK_HOME
 tar -zcvf carbondata.tar.gz carbonlib/
 mv carbondata.tar.gz carbonlib/
@@ -296,7 +287,6 @@ mv carbondata.tar.gz carbonlib/
 </li>
 <li>
 <p>Configure the properties mentioned in the following table in 
<code>$SPARK_HOME/conf/spark-defaults.conf</code> file.</p>
-
 <table>
 <thead>
 <tr>
@@ -346,7 +336,6 @@ mv carbondata.tar.gz carbonlib/
 </li>
 <li>
 <p>Add the following properties in 
<code>$SPARK_HOME/conf/carbon.properties</code>:</p>
-
 <table>
 <thead>
 <tr>
@@ -370,32 +359,23 @@ mv carbondata.tar.gz carbonlib/
 </li>
 <li>
 <p>Verify the installation.</p>
-
-<pre><code> ./bin/spark-shell --master yarn-client --driver-memory 1g
- --executor-cores 2 --executor-memory 2G
+<pre><code>  ./bin/spark-shell --master yarn-client --driver-memory 1g
+  --executor-cores 2 --executor-memory 2G
 </code></pre>
-
-<p><strong>NOTE</strong>: Make sure you have permissions for CarbonData JARs 
and files through which driver and executor will start.</p>
-
-<p>Getting started with CarbonData : <a href="quick-start-guide.html">Quick 
Start</a>, <a href="ddl-operation-on-carbondata.html">DDL Operations on 
CarbonData</a></p>
 </li>
 </ol>
-
+<p><strong>NOTE</strong>: Make sure you have permissions for CarbonData JARs 
and files through which driver and executor will start.</p>
+<p>Getting started with CarbonData : <a href="quick-start-guide.html">Quick 
Start</a>, <a href="ddl-operation-on-carbondata.html">DDL Operations on 
CarbonData</a></p>
 <h2>
 <a id="query-execution-using-carbondata-thrift-server" class="anchor" 
href="#query-execution-using-carbondata-thrift-server" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Query Execution 
Using CarbonData Thrift Server</h2>
-
 <h3>
 <a id="starting-carbondata-thrift-server" class="anchor" 
href="#starting-carbondata-thrift-server" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Starting CarbonData 
Thrift Server.</h3>
-
 <p>a. cd <code>$SPARK_HOME</code></p>
-
 <p>b. Run the following command to start the CarbonData thrift server.</p>
-
-<pre><code>   ./bin/spark-submit --conf 
spark.sql.hive.thriftServer.singleSession=true
-   --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
-   $SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR &lt;carbon_store_path&gt;
+<pre><code>./bin/spark-submit --conf 
spark.sql.hive.thriftServer.singleSession=true
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
+$SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR &lt;carbon_store_path&gt;
 </code></pre>
-
 <table>
 <thead>
 <tr>
@@ -417,24 +397,19 @@ mv carbondata.tar.gz carbonlib/
 </tr>
 </tbody>
 </table>
-
 <p><strong>Examples</strong></p>
-
 <ul>
 <li>Start with default memory and executors.</li>
 </ul>
-
 <pre><code>./bin/spark-submit --conf 
spark.sql.hive.thriftServer.singleSession=true 
 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
 $SPARK_HOME/carbonlib
 /carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
 hdfs://&lt;host_name&gt;:port/user/hive/warehouse/carbon.store
 </code></pre>
-
 <ul>
 <li>Start with Fixed executors and resources.</li>
 </ul>
-
 <pre><code>./bin/spark-submit --conf 
spark.sql.hive.thriftServer.singleSession=true 
 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
 --num-executors 3 --driver-memory 20g --executor-memory 250g 
@@ -443,10 +418,8 @@ 
hdfs://&lt;host_name&gt;:port/user/hive/warehouse/carbon.store
 /carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
 hdfs://&lt;host_name&gt;:port/user/hive/warehouse/carbon.store
 </code></pre>
-
 <h3>
 <a id="connecting-to-carbondata-thrift-server-using-beeline" class="anchor" 
href="#connecting-to-carbondata-thrift-server-using-beeline" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Connecting to CarbonData Thrift Server Using 
Beeline.</h3>
-
 <pre><code>cd $SPARK_HOME
 ./bin/beeline jdbc:hive2://&lt;thriftserver_host&gt;:port
 

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/content/quick-start-guide.html b/content/quick-start-guide.html
index 0c58684..0246f64 100644
--- a/content/quick-start-guide.html
+++ b/content/quick-start-guide.html
@@ -156,21 +156,17 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="quick-start" class="anchor" href="#quick-start" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Quick Start</h1>
-
 <p>This tutorial provides a quick introduction to using CarbonData.</p>
-
 <h2>
 <a id="prerequisites" class="anchor" href="#prerequisites" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Prerequisites</h2>
-
 <ul>
 <li>
-<a href="https://github.com/apache/incubator-carbondata/blob/master/build"; 
target=_blank>Installation and building CarbonData</a>.</li>
+<p><a href="https://github.com/apache/incubator-carbondata/blob/master/build"; 
target=_blank>Installation and building CarbonData</a>.</p>
+</li>
 <li>
 <p>Create a sample.csv file using the following commands. The CSV file is 
required for loading data into CarbonData.</p>
-
 <pre><code>cd carbondata
 cat &gt; sample.csv &lt;&lt; EOF
 id,name,city,age
@@ -181,124 +177,82 @@ EOF
 </code></pre>
 </li>
 </ul>
-
 <h2>
 <a id="interactive-analysis-with-spark-shell-version-21" class="anchor" 
href="#interactive-analysis-with-spark-shell-version-21" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Interactive Analysis with Spark Shell Version 2.1</h2>
-
 <p>Apache Spark Shell provides a simple way to learn the API, as well as a 
powerful tool to analyze data interactively. Please visit <a 
href="http://spark.apache.org/docs/latest/"; target=_blank>Apache Spark 
Documentation</a> for more details on Spark shell.</p>
-
 <h4>
 <a id="basics" class="anchor" href="#basics" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Basics</h4>
-
 <p>Start Spark shell by running the following command in the Spark 
directory:</p>
-
 <pre><code>./bin/spark-shell --jars &lt;carbondata assembly jar path&gt;
 </code></pre>
-
 <p><strong>NOTE</strong>: Assembly jar will be available after <a 
href="https://github.com/apache/incubator-carbondata/blob/master/build/README.md";
 target=_blank>building CarbonData</a> and can be copied from 
<code>./assembly/target/scala-2.1x/carbondata_xxx.jar</code></p>
-
 <p>In this shell, SparkSession is readily available as <code>spark</code> and 
Spark context is readily available as <code>sc</code>.</p>
-
 <p>In order to create a CarbonSession we will have to configure it explicitly 
in the following manner :</p>
-
 <ul>
 <li>Import the following :</li>
 </ul>
-
 <pre><code>import org.apache.spark.sql.SparkSession
 import org.apache.spark.sql.CarbonSession._
 </code></pre>
-
 <ul>
 <li>Create a CarbonSession :</li>
 </ul>
-
 <pre><code>val carbon = 
SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("&lt;hdfs 
store path&gt;")
 </code></pre>
-
 <p><strong>NOTE</strong>: By default the metastore location points to 
<code>../carbon.metastore</code>. The user can provide their own metastore 
location to CarbonSession like 
<code>SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("&lt;hdfs store path&gt;", "&lt;local metastore path&gt;")</code></p>
-
 <h4>
 <a id="executing-queries" class="anchor" href="#executing-queries" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Executing Queries</h4>
-
 <h6>
 <a id="creating-a-table" class="anchor" href="#creating-a-table" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Creating a Table</h6>
-
 <pre><code>scala&gt;carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id 
string, name string, city string, age Int) STORED BY 'carbondata'")
 </code></pre>
-
 <h6>
 <a id="loading-data-to-a-table" class="anchor" href="#loading-data-to-a-table" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Loading Data to a Table</h6>
-
 <pre><code>scala&gt;carbon.sql("LOAD DATA INPATH 'sample.csv file path' INTO 
TABLE test_table")
 </code></pre>
-
 <p><strong>NOTE</strong>: Please provide the real file path of 
<code>sample.csv</code> for the above script.</p>
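 <p>For instance, if <code>sample.csv</code> was created under a hypothetical 
<code>/home/user/carbondata</code> directory, the load would look like 
this:</p>
 <pre><code>scala&gt;carbon.sql("LOAD DATA INPATH '/home/user/carbondata/sample.csv' INTO 
TABLE test_table")
 </code></pre>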
-
 <h6>
 <a id="query-data-from-a-table" class="anchor" href="#query-data-from-a-table" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Query Data from a Table</h6>
-
 <pre><code>scala&gt;carbon.sql("SELECT * FROM test_table").show()
 
 scala&gt;carbon.sql("SELECT city, avg(age), sum(age) FROM test_table GROUP BY 
city").show()
 </code></pre>
-
 <h2>
 <a id="interactive-analysis-with-spark-shell-version-16" class="anchor" 
href="#interactive-analysis-with-spark-shell-version-16" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Interactive Analysis with Spark Shell Version 1.6</h2>
-
 <h4>
 <a id="basics-1" class="anchor" href="#basics-1" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Basics</h4>
-
 <p>Start Spark shell by running the following command in the Spark 
directory:</p>
-
 <pre><code>./bin/spark-shell --jars &lt;carbondata assembly jar path&gt;
 </code></pre>
-
 <p><strong>NOTE</strong>: Assembly jar will be available after <a 
href="https://github.com/apache/incubator-carbondata/blob/master/build/README.md";
 target=_blank>building CarbonData</a> and can be copied from 
<code>./assembly/target/scala-2.1x/carbondata_xxx.jar</code></p>
-
 <p><strong>NOTE</strong>: In this shell, SparkContext is readily available as 
<code>sc</code>.</p>
-
 <ul>
 <li>In order to execute the Queries we need to import CarbonContext:</li>
 </ul>
-
 <pre><code>import org.apache.spark.sql.CarbonContext
 </code></pre>
-
 <ul>
 <li>Create an instance of CarbonContext in the following manner :</li>
 </ul>
-
 <pre><code>val cc = new CarbonContext(sc, "&lt;hdfs store path&gt;")
 </code></pre>
-
 <p><strong>NOTE</strong>: If running on a local machine without HDFS, 
configure the local machine's store path instead of the HDFS store path.</p>
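 <p>For example, with a hypothetical local store path:</p>
 <pre><code>val cc = new CarbonContext(sc, "/home/user/carbon.store")
 </code></pre>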
-
 <h4>
 <a id="executing-queries-1" class="anchor" href="#executing-queries-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Executing Queries</h4>
-
 <h6>
 <a id="creating-a-table-1" class="anchor" href="#creating-a-table-1" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Creating a Table</h6>
-
 <pre><code>scala&gt;cc.sql("CREATE TABLE IF NOT EXISTS test_table (id string, 
name string, city string, age Int) STORED BY 'carbondata'")
 </code></pre>
-
 <p>To see the table created :</p>
-
 <pre><code>scala&gt;cc.sql("SHOW TABLES").show()
 </code></pre>
-
 <h6>
 <a id="loading-data-to-a-table-1" class="anchor" 
href="#loading-data-to-a-table-1" aria-hidden="true"><span aria-hidden="true" 
class="octicon octicon-link"></span></a>Loading Data to a Table</h6>
-
 <pre><code>scala&gt;cc.sql("LOAD DATA INPATH 'sample.csv file path' INTO TABLE 
test_table")
 </code></pre>
-
 <p><strong>NOTE</strong>: Please provide the real file path of 
<code>sample.csv</code> for the above script.</p>
-
 <h6>
 <a id="query-data-from-a-table-1" class="anchor" 
href="#query-data-from-a-table-1" aria-hidden="true"><span aria-hidden="true" 
class="octicon octicon-link"></span></a>Query Data from a Table</h6>
-
 <pre><code>scala&gt;cc.sql("SELECT * FROM test_table").show()
 scala&gt;cc.sql("SELECT city, avg(age), sum(age) FROM test_table GROUP BY 
city").show()
 </code></pre>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/2f826c1b/content/supported-data-types-in-carbondata.html
----------------------------------------------------------------------
diff --git a/content/supported-data-types-in-carbondata.html 
b/content/supported-data-types-in-carbondata.html
index 13e640f..b56bc59 100644
--- a/content/supported-data-types-in-carbondata.html
+++ b/content/supported-data-types-in-carbondata.html
@@ -156,17 +156,13 @@
                             <div class="row">
                                 <div class="col-sm-12  col-md-12">
                                     <div>
-
 <h1>
 <a id="data-types" class="anchor" href="#data-types" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Data Types</h1>
-
 <h4>
 <a id="carbondata-supports-the-following-data-types" class="anchor" 
href="#carbondata-supports-the-following-data-types" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>CarbonData supports 
the following data types:</h4>
-
 <ul>
 <li>
 <p>Numeric Types</p>
-
 <ul>
 <li>SMALLINT</li>
 <li>INT/INTEGER</li>
@@ -177,7 +173,6 @@
 </li>
 <li>
 <p>Date/Time Types</p>
-
 <ul>
 <li>TIMESTAMP</li>
 <li>DATE</li>
@@ -185,7 +180,6 @@
 </li>
 <li>
 <p>String Types</p>
-
 <ul>
 <li>STRING</li>
 <li>CHAR</li>
@@ -193,7 +187,6 @@
 </li>
 <li>
 <p>Complex Types</p>
-
 <ul>
 <li>arrays: ARRAY<code>&lt;data_type&gt;</code>
 </li>
