Repository: carbondata
Updated Branches:
  refs/heads/master d5e3000d1 -> f8db66d6b


Fixed linking and content issues


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d055399d
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d055399d
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d055399d

Branch: refs/heads/master
Commit: d055399d7892845fe9b687e65ef45d68e98e4c16
Parents: d5e3000
Author: jatin <jatin.de...@knoldus.in>
Authored: Thu Jun 15 13:18:48 2017 +0530
Committer: chenliang613 <chenliang...@apache.org>
Committed: Mon Jun 19 15:51:28 2017 +0800

----------------------------------------------------------------------
 docs/faq.md                       | 2 +-
 docs/useful-tips-on-carbondata.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/d055399d/docs/faq.md
----------------------------------------------------------------------
diff --git a/docs/faq.md b/docs/faq.md
index 88db7d5..45fd960 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -80,7 +80,7 @@ The property carbon.lock.type configuration specifies the type of lock to be acq
 In order to build CarbonData project it is necessary to specify the spark profile. The spark profile sets the Spark Version. You need to specify the ``spark version`` while using Maven to build project.
 
 ## How Carbon will behave when execute insert operation in abnormal scenarios?
-Carbon support insert operation, you can refer to the syntax mentioned in [DML Operations on CarbonData](http://carbondata.apache.org/dml-operation-on-carbondata).
+Carbon support insert operation, you can refer to the syntax mentioned in [DML Operations on CarbonData](dml-operation-on-carbondata.md).
 First, create a soucre table in spark-sql and load data into this created table.
 
 ```

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d055399d/docs/useful-tips-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/useful-tips-on-carbondata.md b/docs/useful-tips-on-carbondata.md
index 9f290c7..06bc12b 100644
--- a/docs/useful-tips-on-carbondata.md
+++ b/docs/useful-tips-on-carbondata.md
@@ -23,7 +23,7 @@ The following sections will elaborate on the above topics :
 
 * [Suggestions to create CarbonData Table](#suggestions-to-create-carbondata-table)
 * [Configuration for Optimizing Data Loading performance for Massive Data](#configuration-for-optimizing-data-loading-performance-for-massive-data)
-* [Optimizing Mass Data Loading](#optimizing-mass-data-loading)
+* [Optimizing Mass Data Loading](#configurations-for-optimizing-carbondata-performance)
 
 
 ## Suggestions to Create CarbonData Table
@@ -209,4 +209,4 @@ scenarios. After the completion of POC, some of the configurations impacting the
 | carbon.detail.batch.size | spark/carbonlib/carbon.properties | Data loading | The buffer size to store records, returned from the block scan. | In limit scenario this parameter is very important. For example your query limit is 1000. But if we set this value to 3000 that means we get 3000 records from scan but spark will only take 1000 rows. So the 2000 remaining are useless. In one Finance test case after we set it to 100, in the limit 1000 scenario the performance increase about 2 times in comparison to if we set this value to 12000. |
 | carbon.use.local.dir | spark/carbonlib/carbon.properties | Data loading | Whether use YARN local directories for multi-table load disk load balance | If this is set it to true CarbonData will use YARN local directories for multi-table load disk load balance, that will improve the data load performance. |
 
-Note: If your CarbonData instance is provided only for query, you may specify the conf 'spark.speculation=true' which is conf in spark.
+Note: If your CarbonData instance is provided only for query, you may specify the property 'spark.speculation=true' which is in conf directory of spark.

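For reference, `spark.speculation` (mentioned in the final hunk above) is a standard Spark property. A minimal sketch of enabling it, assuming a default Spark installation layout where `spark-defaults.conf` lives in Spark's `conf` directory:

```
# conf/spark-defaults.conf
# Enables speculative re-execution of slow tasks, which the patched note
# suggests for query-only CarbonData instances.
spark.speculation true
```

It can equally be passed per-session, e.g. `spark-sql --conf spark.speculation=true`.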