This is an automated email from the ASF dual-hosted git repository.

danny0405 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new c11cf7c  [HUDI-3084] Fix the link of flink guide page (#4410)
c11cf7c is described below

commit c11cf7c9e1ab040538f8e6c69df99a30ef9f15e1
Author: Danny Chan <[email protected]>
AuthorDate: Tue Dec 21 17:01:26 2021 +0800

    [HUDI-3084] Fix the link of flink guide page (#4410)
---
 website/docs/flink-quick-start-guide.md                          | 9 +++++----
 website/versioned_docs/version-0.10.0/flink-quick-start-guide.md | 9 +++++----
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/website/docs/flink-quick-start-guide.md b/website/docs/flink-quick-start-guide.md
index 347dbad..d5dd05d 100644
--- a/website/docs/flink-quick-start-guide.md
+++ b/website/docs/flink-quick-start-guide.md
@@ -8,10 +8,11 @@ This guide provides a document at Hudi's capabilities using Flink SQL. We can fe
 Reading this guide, you can quickly start using Flink to write to(read from) Hudi, have a deeper understanding of configuration and optimization:
 
 - **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Flink Configuration](#flink-configuration), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](#table-option).
-- **Writing Data** : Flink supports different writing data use cases, such as [Bulk Insert](#bulk-insert), [Index Bootstrap](#index-bootstrap), [Changelog Mode](#changelog-mode), [Insert Mode](#insert-mode)  and [Offline Compaction](#offline-compaction).
-- **Querying Data** : Flink supports different querying data use cases, such as [Hive Query](#hive-query), [Presto Query](#presto-query).
-- **Optimization** : For write/read tasks, this guide gives some optimization suggestions, such as [Memory Optimization](#memory-optimization) and [Write Rate Limit](#write-rate-limit).
+- **Configuration** : For [Flink Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
+- **Writing Data** : Flink supports different writing data use cases, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different querying data use cases, such as [Incremental Query](hoodie_deltastreamer#incremental-query), [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
 
 ## Quick Start
 
diff --git a/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md b/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
index 347dbad..d5dd05d 100644
--- a/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
+++ b/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
@@ -8,10 +8,11 @@ This guide provides a document at Hudi's capabilities using Flink SQL. We can fe
 Reading this guide, you can quickly start using Flink to write to(read from) Hudi, have a deeper understanding of configuration and optimization:
 
 - **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Flink Configuration](#flink-configuration), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](#table-option).
-- **Writing Data** : Flink supports different writing data use cases, such as [Bulk Insert](#bulk-insert), [Index Bootstrap](#index-bootstrap), [Changelog Mode](#changelog-mode), [Insert Mode](#insert-mode)  and [Offline Compaction](#offline-compaction).
-- **Querying Data** : Flink supports different querying data use cases, such as [Hive Query](#hive-query), [Presto Query](#presto-query).
-- **Optimization** : For write/read tasks, this guide gives some optimization suggestions, such as [Memory Optimization](#memory-optimization) and [Write Rate Limit](#write-rate-limit).
+- **Configuration** : For [Flink Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
+- **Writing Data** : Flink supports different writing data use cases, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different querying data use cases, such as [Incremental Query](hoodie_deltastreamer#incremental-query), [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
 
 ## Quick Start
 

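For context on the bullets this commit relinks: the "Configuration" item describes per-job setup through table options in a Flink SQL DDL, and the "Tuning" item mentions a write rate limit option. A minimal sketch of what such a DDL looks like, assuming a local filesystem path and example option values (the table name, path, and values below are illustrative assumptions, not taken from this commit):

```sql
-- Hypothetical Flink SQL DDL showing per-job Hudi configuration via table options.
-- Table name, path, and option values are examples only.
CREATE TABLE hudi_demo (
  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
  name VARCHAR(10),
  ts   TIMESTAMP(3)
) WITH (
  'connector'        = 'hudi',
  'path'             = 'file:///tmp/hudi_demo',  -- base path of the Hudi table
  'table.type'       = 'MERGE_ON_READ',          -- or COPY_ON_WRITE (the default)
  'write.rate.limit' = '2000'                    -- per-job write rate limit
);
```

Options set this way apply only to the job that creates the table, whereas settings in `$FLINK_HOME/conf/flink-conf.yaml` apply globally, which is the distinction the relinked "Configuration" bullet draws.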