This is an automated email from the ASF dual-hosted git repository.
danny0405 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 3e2fba8cd07 [DOC] Documentation change to Increase readability for basic_configurations (#9225)
3e2fba8cd07 is described below
commit 3e2fba8cd077c697a575e0ad21ba4cad72203357
Author: Vijayasarathi Balasubramanian <[email protected]>
AuthorDate: Wed Jul 19 03:53:14 2023 -0400
[DOC] Documentation change to Increase readability for basic_configurations (#9225)
Update basic_configurations.md
---
website/docs/basic_configurations.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/basic_configurations.md b/website/docs/basic_configurations.md
index 8a059b000ed..4cb0236093b 100644
--- a/website/docs/basic_configurations.md
+++ b/website/docs/basic_configurations.md
@@ -12,7 +12,7 @@ import TOCInline from '@theme/TOCInline';
 This page covers the basic configurations you may use to write/read Hudi tables. This page only features a subset of the most frequently used configurations. For a full list of all configs, please visit the [All Configurations](/docs/configurations) page.
-- [**Spark Datasource Configs**](#SPARK_DATASOURCE): These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, pick out the write operation, specify how to merge records or choosing query type to read.
+- [**Spark Datasource Configs**](#SPARK_DATASOURCE): These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, specify the write operation, specify how to merge records or choosing query type to read.
 - [**Flink Sql Configs**](#FLINK_SQL): These configs control the Hudi Flink SQL source/sink connectors, providing ability to define record keys, pick out the write operation, specify how to merge records, enable/disable asynchronous compaction or choosing query type to read.
 - [**Write Client Configs**](#WRITE_CLIENT): Internally, the Hudi datasource uses a RDD based HoodieWriteClient API to actually perform writes to storage. These configs provide deep control over lower level aspects like file sizing, compression, parallelism, compaction, write schema, cleaning etc. Although Hudi provides sane defaults, from time-time these configs may need to be tweaked to optimize for specific workloads.
 - [**Metastore and Catalog Sync Configs**](#META_SYNC): Configurations used by the Hudi to sync metadata to external metastores and catalogs.
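As context for the Spark Datasource bullet this commit touches, a minimal sketch of the kind of per-write configs it describes (the config keys are standard Hudi Spark Datasource options; the table name and field names "uuid"/"region" are hypothetical, not taken from the commit):

```python
# Illustrative only — not part of the patch above. Spark Datasource configs are
# passed as options on a DataFrame write; each option below maps to one of the
# capabilities named in the changed bullet.
hudi_options = {
    "hoodie.table.name": "example_table",                     # hypothetical table name
    "hoodie.datasource.write.recordkey.field": "uuid",        # defines record keys
    "hoodie.datasource.write.partitionpath.field": "region",  # defines partitioning
    "hoodie.datasource.write.operation": "upsert",            # specifies the write operation
}

# With a SparkSession and a Hudi bundle on the classpath, these would be applied as:
#   df.write.format("hudi").options(**hudi_options).mode("append").save(base_path)
for key, value in hudi_options.items():
    print(f"{key} = {value}")
```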