This is an automated email from the ASF dual-hosted git repository.
kunalkapoor pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git
The following commit(s) were added to refs/heads/master by this push:
new 52c31bf [HOTFIX] Correct links in documentation
52c31bf is described below
commit 52c31bfdf51aadece22b5a6b8549ab043879a017
Author: Raghunandan S <[email protected]>
AuthorDate: Mon Oct 28 22:31:00 2019 +0530
[HOTFIX] Correct links in documentation
Correct links in documentation
This closes #3423
---
 docs/datamap/mv-datamap-guide.md | 2 +-
 docs/index-server.md             | 6 +++---
 docs/quick-start-guide.md        | 8 ++++----
3 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/docs/datamap/mv-datamap-guide.md b/docs/datamap/mv-datamap-guide.md
index fc1ffd5..4849a89 100644
--- a/docs/datamap/mv-datamap-guide.md
+++ b/docs/datamap/mv-datamap-guide.md
@@ -21,7 +21,7 @@
* [MV DataMap](#mv-datamap-introduction)
* [Loading Data](#loading-data)
* [Querying Data](#querying-data)
-* [Compaction](#compacting-mv-tables)
+* [Compaction](#compacting-mv-datamap)
* [Data Management](#data-management-with-mv-tables)
## Quick example
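For orientation only: the anchor corrected above points at the guide's section on compacting an MV datamap. A minimal, hedged sketch of the feature that section manages, using a hypothetical table and datamap name rather than anything from this commit, would look roughly like this in the SQL dialect of this release:

    -- Hypothetical source table stored in CarbonData format
    CREATE TABLE sales (country STRING, amount INT) STORED AS carbondata;

    -- Hypothetical MV datamap over the sales table; see mv-datamap-guide.md
    -- for the full set of options, refresh behaviour and compaction details
    CREATE DATAMAP agg_sales
    USING 'mv'
    AS SELECT country, SUM(amount) FROM sales GROUP BY country;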
diff --git a/docs/index-server.md b/docs/index-server.md
index 0b888c4..c743184 100644
--- a/docs/index-server.md
+++ b/docs/index-server.md
@@ -42,7 +42,7 @@ information used for pruning.
In IndexServer service a pruning RDD is fired which will take care of the pruning for that
request. This RDD will be creating tasks based on the number of segments that are applicable for
pruning. It can happen that the user has specified segments to access for that table, so only the
-specified segments would be applicable for pruning. Refer: [query-data-with-specified-segments](https://github.com/apache/carbondata/blob/6e50c1c6fc1d6e82a4faf6dc6e0824299786ccc0/docs/segment-management-on-carbondata.md#query-data-with-specified-segments).
+specified segments would be applicable for pruning. Refer: [query-data-with-specified-segments](./segment-management-on-carbondata.md#query-data-with-specified-segments).
IndexServer driver would have 2 important tasks, distributing the segments equally among the
available executors and keeping track of the executor where the segment is cached.
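The specified-segments behaviour referenced by the corrected link is the usual segment-scoped query flow; a hedged example of such a session (database, table, and segment ids are invented for illustration, the syntax follows segment-management-on-carbondata.md):

    -- Restrict queries on a hypothetical table to segments 1, 3 and 5,
    -- so only those segments are candidates for pruning on the index server
    SET carbon.input.segments.default.sales = 1,3,5;
    SELECT COUNT(*) FROM sales;

    -- Reset to query all segments again
    SET carbon.input.segments.default.sales = *;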
@@ -95,7 +95,7 @@ The show metacache DDL has a new column called cache location will indicate whet
from executor or driver. To drop cache the user has to enable/disable the index server using the
dynamic configuration to clear the cache of the desired location.
-Refer: [MetaCacheDDL](https://github.com/apache/carbondata/blob/master/docs/ddl-of-carbondata.md#cache)
+Refer: [MetaCacheDDL](./ddl-of-carbondata.md#cache)
## Fallback
In case of any failure the index server would fallback to embedded mode
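For reference, the cache DDL behind the corrected MetaCacheDDL link looks roughly like the following; the table name is hypothetical, and ddl-of-carbondata.md#cache remains the authoritative syntax:

    -- Inspect what is cached, and where (the new cache location column)
    SHOW METACACHE;
    SHOW METACACHE ON TABLE sales;

    -- Clear the cache for one table; with the index server enabled or disabled
    -- via dynamic configuration this targets the executor or driver cache
    DROP METACACHE ON TABLE sales;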
@@ -138,7 +138,7 @@ The Index Server is a long running service therefore the 'spark.yarn.keytab' and
| carbon.enable.index.server | false | Enable the use of index server for pruning for the whole application. |
| carbon.index.server.ip | NA | Specify the IP/HOST on which the server is started. Better to specify the private IP. |
| carbon.index.server.port | NA | The port on which the index server is started. |
-| carbon.disable.index.server.fallback | false | Whether to enable/disable fallback for index server. Should be used for testing purposes only. Refer: [Fallback](#Fallback)|
+| carbon.disable.index.server.fallback | false | Whether to enable/disable fallback for index server. Should be used for testing purposes only. Refer: [Fallback](#fallback)|
|carbon.index.server.max.jobname.length|NA|The max length of the job to show in the index server service UI. For bigger queries this may impact performance as the whole string would be sent from JDBCServer to IndexServer.|
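The properties above are static carbon.properties settings; the document also describes switching the index server on or off per session through dynamic configuration, which in a Spark SQL session would look roughly like this (values are illustrative):

    -- Route pruning for this session through the index server
    SET carbon.enable.index.server = true;

    -- Switch back to driver-side pruning (the cache is then built on the driver)
    SET carbon.enable.index.server = false;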
diff --git a/docs/quick-start-guide.md b/docs/quick-start-guide.md
index 316fa26..dedba36 100644
--- a/docs/quick-start-guide.md
+++ b/docs/quick-start-guide.md
@@ -53,17 +53,17 @@ CarbonData can be integrated with Spark,Presto and Hive execution engines. The b
[Installing and Configuring CarbonData on Presto](#installing-and-configuring-carbondata-on-presto)
#### Hive
-[Installing and Configuring CarbonData on Hive](https://github.com/apache/carbondata/blob/master/docs/hive-guide.md)
+[Installing and Configuring CarbonData on Hive](./hive-guide.md)
### Integration with Storage Engines
#### HDFS
-[CarbonData supports read and write with HDFS](https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md#installing-and-configuring-carbondata-on-standalone-spark-cluster)
+[CarbonData supports read and write with HDFS](#installing-and-configuring-carbondata-on-standalone-spark-cluster)
#### S3
-[CarbonData supports read and write with S3](https://github.com/apache/carbondata/blob/master/docs/s3-guide.md)
+[CarbonData supports read and write with S3](./s3-guide.md)
#### Alluxio
-[CarbonData supports read and write with Alluxio](https://github.com/apache/carbondata/blob/master/docs/alluxio-guide.md)
+[CarbonData supports read and write with Alluxio](./alluxio-guide.md)
## Installing and Configuring CarbonData to run locally with Spark Shell