This is an automated email from the ASF dual-hosted git repository.
bhavanisudha pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
new b74ea5d96f [HUDI-4977][DOCS] Fix broken links in Flink quick start guide (#6972)
b74ea5d96f is described below
commit b74ea5d96fd7efc135d25c230c4662e33a18dd3d
Author: Bhavani Sudha Saktheeswaran <[email protected]>
AuthorDate: Tue Oct 18 11:32:15 2022 -0700
[HUDI-4977][DOCS] Fix broken links in Flink quick start guide (#6972)
---
website/docs/flink-quick-start-guide.md | 14 +++++++-------
...09-28_Data_processing_with_Spark_time_traveling.png | Bin 0 -> 46708 bytes
...he_Hudi_tables_using_AWS_Glue_and_DeltaStreamer.png | Bin 0 -> 183782 bytes
.../version-0.10.0/flink-quick-start-guide.md | 14 +++++++-------
.../version-0.10.1/flink-quick-start-guide.md | 14 +++++++-------
.../version-0.11.0/flink-quick-start-guide.md | 14 +++++++-------
.../version-0.11.1/flink-quick-start-guide.md | 14 +++++++-------
.../version-0.12.0/flink-quick-start-guide.md | 14 +++++++-------
8 files changed, 42 insertions(+), 42 deletions(-)
diff --git a/website/docs/flink-quick-start-guide.md b/website/docs/flink-quick-start-guide.md
index 5d1c9784ac..e8fb8829e9 100644
--- a/website/docs/flink-quick-start-guide.md
+++ b/website/docs/flink-quick-start-guide.md
@@ -10,12 +10,12 @@ This page introduces Flink-Hudi integration. We can feel the unique charm of how
This guide helps you quickly start using Flink on Hudi, and learn different modes for reading/writing Hudi by Flink:
- **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Global Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
-- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
-- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](querying_data#streaming-query) and [Incremental Query](querying_data#incremental-query).
-- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
-- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
-- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Configuration** : For [Global Configuration](/docs/flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](/docs/flink_configuration#table-options).
+- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](/docs/hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](/docs/hoodie_deltastreamer#bulk-insert), [Index Bootstrap](/docs/hoodie_deltastreamer#index-bootstrap), [Changelog Mode](/docs/hoodie_deltastreamer#changelog-mode) and [Append Mode](/docs/hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](/docs/querying_data#streaming-query) and [Incremental Query](/docs/querying_data#incremental-query).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](/docs/flink_configuration#memory-optimization) and [Write Rate Limit](/docs/flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](/docs/compaction#flink-offline-compaction).
+- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](/docs/syncing_metastore#flink-setup), [Presto Query](/docs/query_engine_setup#prestodb).
## Quick Start
@@ -48,7 +48,7 @@ Start a standalone Flink cluster within hadoop environment.
Before you start up the cluster, we suggest to config the cluster as follows:
- in `$FLINK_HOME/conf/flink-conf.yaml`, add config option `taskmanager.numberOfTaskSlots: 4`
-- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](flink_configuration#global-configurations)
+- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](/docs/flink_configuration#global-configurations)
- in `$FLINK_HOME/conf/workers`, add item `localhost` as 4 lines so that there are 4 workers on the local cluster
Now starts the cluster:
diff --git a/website/static/assets/images/blog/2022-09-28_Data_processing_with_Spark_time_traveling.png b/website/static/assets/images/blog/2022-09-28_Data_processing_with_Spark_time_traveling.png
new file mode 100644
index 0000000000..1d9fa29916
Binary files /dev/null and b/website/static/assets/images/blog/2022-09-28_Data_processing_with_Spark_time_traveling.png differ
diff --git a/website/static/assets/images/blog/2022-10-06_Ingest_streaming_data_to_Apache_Hudi_tables_using_AWS_Glue_and_DeltaStreamer.png b/website/static/assets/images/blog/2022-10-06_Ingest_streaming_data_to_Apache_Hudi_tables_using_AWS_Glue_and_DeltaStreamer.png
new file mode 100644
index 0000000000..4e9bdbd8ba
Binary files /dev/null and b/website/static/assets/images/blog/2022-10-06_Ingest_streaming_data_to_Apache_Hudi_tables_using_AWS_Glue_and_DeltaStreamer.png differ
diff --git a/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md b/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
index 323acad284..0cd257db30 100644
--- a/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
+++ b/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
@@ -8,12 +8,12 @@ This guide provides an instruction for Flink Hudi integration. We can feel the u
Reading this guide, you can quickly start using Flink on Hudi, learn different modes for reading/writing Hudi by Flink:
- **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Global Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
-- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
-- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](hoodie_deltastreamer#streaming-query) and [Incremental Query](hoodie_deltastreamer#incremental-query).
-- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
-- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
-- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Configuration** : For [Global Configuration](/docs/flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](/docs/flink_configuration#table-options).
+- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](/docs/hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](/docs/hoodie_deltastreamer#bulk-insert), [Index Bootstrap](/docs/hoodie_deltastreamer#index-bootstrap), [Changelog Mode](/docs/hoodie_deltastreamer#changelog-mode) and [Append Mode](/docs/hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](/docs/hoodie_deltastreamer#streaming-query) and [Incremental Query](/docs/hoodie_deltastreamer#incremental-query).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](/docs/flink_configuration#memory-optimization) and [Write Rate Limit](/docs/flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](/docs/compaction#flink-offline-compaction).
+- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](/docs/syncing_metastore#flink-setup), [Presto Query](/docs/query_engine_setup#prestodb).
## Quick Start
@@ -31,7 +31,7 @@ Start a standalone Flink cluster within hadoop environment.
Before you start up the cluster, we suggest to config the cluster as follows:
- in `$FLINK_HOME/conf/flink-conf.yaml`, add config option `taskmanager.numberOfTaskSlots: 4`
-- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](#flink-configuration)
+- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](/docs/flink_configuration#global-configurations)
- in `$FLINK_HOME/conf/workers`, add item `localhost` as 4 lines so that there are 4 workers on the local cluster
Now starts the cluster:
diff --git a/website/versioned_docs/version-0.10.1/flink-quick-start-guide.md b/website/versioned_docs/version-0.10.1/flink-quick-start-guide.md
index a723b8ed7b..02b6c8c264 100644
--- a/website/versioned_docs/version-0.10.1/flink-quick-start-guide.md
+++ b/website/versioned_docs/version-0.10.1/flink-quick-start-guide.md
@@ -8,12 +8,12 @@ This guide provides an instruction for Flink Hudi integration. We can feel the u
Reading this guide, you can quickly start using Flink on Hudi, learn different modes for reading/writing Hudi by Flink:
- **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Global Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
-- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
-- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](hoodie_deltastreamer#streaming-query) and [Incremental Query](hoodie_deltastreamer#incremental-query).
-- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
-- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
-- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Configuration** : For [Global Configuration](/docs/flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](/docs/flink_configuration#table-options).
+- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](/docs/hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](/docs/hoodie_deltastreamer#bulk-insert), [Index Bootstrap](/docs/hoodie_deltastreamer#index-bootstrap), [Changelog Mode](/docs/hoodie_deltastreamer#changelog-mode) and [Append Mode](/docs/hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](/docs/hoodie_deltastreamer#streaming-query) and [Incremental Query](/docs/hoodie_deltastreamer#incremental-query).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](/docs/flink_configuration#memory-optimization) and [Write Rate Limit](/docs/flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](/docs/compaction#flink-offline-compaction).
+- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](/docs/syncing_metastore#flink-setup), [Presto Query](/docs/query_engine_setup#prestodb).
## Quick Start
@@ -31,7 +31,7 @@ Start a standalone Flink cluster within hadoop environment.
Before you start up the cluster, we suggest to config the cluster as follows:
- in `$FLINK_HOME/conf/flink-conf.yaml`, add config option `taskmanager.numberOfTaskSlots: 4`
-- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](#flink-configuration)
+- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](/docs/flink_configuration#global-configurations)
- in `$FLINK_HOME/conf/workers`, add item `localhost` as 4 lines so that there are 4 workers on the local cluster
Now starts the cluster:
diff --git a/website/versioned_docs/version-0.11.0/flink-quick-start-guide.md b/website/versioned_docs/version-0.11.0/flink-quick-start-guide.md
index e9ca0c3df5..e9869c7fce 100644
--- a/website/versioned_docs/version-0.11.0/flink-quick-start-guide.md
+++ b/website/versioned_docs/version-0.11.0/flink-quick-start-guide.md
@@ -8,12 +8,12 @@ This page introduces Flink-Hudi integration. We can feel the unique charm of how
This guide helps you quickly start using Flink on Hudi, and learn different modes for reading/writing Hudi by Flink:
- **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Global Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
-- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
-- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](hoodie_deltastreamer#streaming-query) and [Incremental Query](hoodie_deltastreamer#incremental-query).
-- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
-- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
-- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Configuration** : For [Global Configuration](/docs/flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](/docs/flink_configuration#table-options).
+- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](/docs/hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](/docs/hoodie_deltastreamer#bulk-insert), [Index Bootstrap](/docs/hoodie_deltastreamer#index-bootstrap), [Changelog Mode](/docs/hoodie_deltastreamer#changelog-mode) and [Append Mode](/docs/hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](/docs/hoodie_deltastreamer#streaming-query) and [Incremental Query](/docs/hoodie_deltastreamer#incremental-query).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](/docs/flink_configuration#memory-optimization) and [Write Rate Limit](/docs/flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](/docs/compaction#flink-offline-compaction).
+- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](/docs/syncing_metastore#flink-setup), [Presto Query](/docs/query_engine_setup#prestodb).
## Quick Start
@@ -38,7 +38,7 @@ Start a standalone Flink cluster within hadoop environment.
Before you start up the cluster, we suggest to config the cluster as follows:
- in `$FLINK_HOME/conf/flink-conf.yaml`, add config option `taskmanager.numberOfTaskSlots: 4`
-- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](flink_configuration#global-configurations)
+- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](/docs/flink_configuration#global-configurations)
- in `$FLINK_HOME/conf/workers`, add item `localhost` as 4 lines so that there are 4 workers on the local cluster
Now starts the cluster:
diff --git a/website/versioned_docs/version-0.11.1/flink-quick-start-guide.md b/website/versioned_docs/version-0.11.1/flink-quick-start-guide.md
index e9ca0c3df5..e9869c7fce 100644
--- a/website/versioned_docs/version-0.11.1/flink-quick-start-guide.md
+++ b/website/versioned_docs/version-0.11.1/flink-quick-start-guide.md
@@ -8,12 +8,12 @@ This page introduces Flink-Hudi integration. We can feel the unique charm of how
This guide helps you quickly start using Flink on Hudi, and learn different modes for reading/writing Hudi by Flink:
- **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Global Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
-- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
-- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](hoodie_deltastreamer#streaming-query) and [Incremental Query](hoodie_deltastreamer#incremental-query).
-- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
-- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
-- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Configuration** : For [Global Configuration](/docs/flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](/docs/flink_configuration#table-options).
+- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](/docs/hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](/docs/hoodie_deltastreamer#bulk-insert), [Index Bootstrap](/docs/hoodie_deltastreamer#index-bootstrap), [Changelog Mode](/docs/hoodie_deltastreamer#changelog-mode) and [Append Mode](/docs/hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](/docs/hoodie_deltastreamer#streaming-query) and [Incremental Query](/docs/hoodie_deltastreamer#incremental-query).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](/docs/flink_configuration#memory-optimization) and [Write Rate Limit](/docs/flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](/docs/compaction#flink-offline-compaction).
+- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](/docs/syncing_metastore#flink-setup), [Presto Query](/docs/query_engine_setup#prestodb).
## Quick Start
@@ -38,7 +38,7 @@ Start a standalone Flink cluster within hadoop environment.
Before you start up the cluster, we suggest to config the cluster as follows:
- in `$FLINK_HOME/conf/flink-conf.yaml`, add config option `taskmanager.numberOfTaskSlots: 4`
-- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](flink_configuration#global-configurations)
+- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](/docs/flink_configuration#global-configurations)
- in `$FLINK_HOME/conf/workers`, add item `localhost` as 4 lines so that there are 4 workers on the local cluster
Now starts the cluster:
diff --git a/website/versioned_docs/version-0.12.0/flink-quick-start-guide.md b/website/versioned_docs/version-0.12.0/flink-quick-start-guide.md
index 2f33027cf6..d0dd5cdf10 100644
--- a/website/versioned_docs/version-0.12.0/flink-quick-start-guide.md
+++ b/website/versioned_docs/version-0.12.0/flink-quick-start-guide.md
@@ -10,12 +10,12 @@ This page introduces Flink-Hudi integration. We can feel the unique charm of how
This guide helps you quickly start using Flink on Hudi, and learn different modes for reading/writing Hudi by Flink:
- **Quick Start** : Read [Quick Start](#quick-start) to get started quickly Flink sql client to write to(read from) Hudi.
-- **Configuration** : For [Global Configuration](flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](flink_configuration#table-options).
-- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](hoodie_deltastreamer#bulk-insert), [Index Bootstrap](hoodie_deltastreamer#index-bootstrap), [Changelog Mode](hoodie_deltastreamer#changelog-mode) and [Append Mode](hoodie_deltastreamer#append-mode).
-- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](querying_data#streaming-query) and [Incremental Query](querying_data#incremental-query).
-- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](flink_configuration#memory-optimization) and [Write Rate Limit](flink_configuration#write-rate-limit).
-- **Optimization**: Offline compaction is supported [Offline Compaction](compaction#flink-offline-compaction).
-- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](syncing_metastore#flink-setup), [Presto Query](query_engine_setup#prestodb).
+- **Configuration** : For [Global Configuration](/docs/flink_configuration#global-configurations), sets up through `$FLINK_HOME/conf/flink-conf.yaml`. For per job configuration, sets up through [Table Option](/docs/flink_configuration#table-options).
+- **Writing Data** : Flink supports different modes for writing, such as [CDC Ingestion](/docs/hoodie_deltastreamer#cdc-ingestion), [Bulk Insert](/docs/hoodie_deltastreamer#bulk-insert), [Index Bootstrap](/docs/hoodie_deltastreamer#index-bootstrap), [Changelog Mode](/docs/hoodie_deltastreamer#changelog-mode) and [Append Mode](/docs/hoodie_deltastreamer#append-mode).
+- **Querying Data** : Flink supports different modes for reading, such as [Streaming Query](/docs/querying_data#streaming-query) and [Incremental Query](/docs/querying_data#incremental-query).
+- **Tuning** : For write/read tasks, this guide gives some tuning suggestions, such as [Memory Optimization](/docs/flink_configuration#memory-optimization) and [Write Rate Limit](/docs/flink_configuration#write-rate-limit).
+- **Optimization**: Offline compaction is supported [Offline Compaction](/docs/compaction#flink-offline-compaction).
+- **Query Engines**: Besides Flink, many other engines are integrated: [Hive Query](/docs/syncing_metastore#flink-setup), [Presto Query](/docs/query_engine_setup#prestodb).
## Quick Start
@@ -48,7 +48,7 @@ Start a standalone Flink cluster within hadoop environment.
Before you start up the cluster, we suggest to config the cluster as follows:
- in `$FLINK_HOME/conf/flink-conf.yaml`, add config option `taskmanager.numberOfTaskSlots: 4`
-- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](flink_configuration#global-configurations)
+- in `$FLINK_HOME/conf/flink-conf.yaml`, [add other global configurations according to the characteristics of your task](/docs/flink_configuration#global-configurations)
- in `$FLINK_HOME/conf/workers`, add item `localhost` as 4 lines so that there are 4 workers on the local cluster
Now starts the cluster:
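
For reference, the cluster-prep steps that every hunk above touches can be sketched as a small shell snippet. This is only an illustration of what the docs describe, not part of the patch; the `/tmp/flink-quickstart-demo` path is a scratch directory stand-in for a real Flink install.

```shell
# Standalone-cluster prep from the Flink quick start guide, as a sketch.
# FLINK_HOME defaults to a scratch dir here; point it at a real install
# before actually starting a cluster.
FLINK_HOME="${FLINK_HOME:-/tmp/flink-quickstart-demo}"
mkdir -p "$FLINK_HOME/conf"

# 4 slots per TaskManager, per the guide's suggested config option
echo 'taskmanager.numberOfTaskSlots: 4' >> "$FLINK_HOME/conf/flink-conf.yaml"

# 'localhost' as 4 lines in conf/workers -> 4 workers on the local cluster
for _ in 1 2 3 4; do echo localhost; done > "$FLINK_HOME/conf/workers"

# with a real install, the cluster is then started via:
# "$FLINK_HOME/bin/start-cluster.sh"
```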