yihua commented on code in PR #9258:
URL: https://github.com/apache/hudi/pull/9258#discussion_r1275461682


##########
website/docs/basic_configurations.md:
##########
@@ -1,25 +1,19 @@
 ---
 title: Basic Configurations
 summary: This page covers the basic configurations you may use to write/read Hudi tables. This page only features a subset of the most frequently used configurations. For a full list of all configs, please visit the [All Configurations](/docs/configurations) page.
-last_modified_at: 2023-07-07T17:00:30.473
-hide_table_of_contents: true
+last_modified_at: 2023-07-21T07:02:09.459
 ---
-import TOCInline from '@theme/TOCInline';
 
-<TOCInline toc={toc} minHeadingLevel={2} maxHeadingLevel={5}/>
-
----
 
 This page covers the basic configurations you may use to write/read Hudi tables. This page only features a subset of the most frequently used configurations. For a full list of all configs, please visit the [All Configurations](/docs/configurations) page.
 
-- [**Spark Datasource Configs**](#SPARK_DATASOURCE): These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, specify the write operation, specify how to merge records or choosing query type to read.
+- [**Spark Datasource Configs**](#SPARK_DATASOURCE): These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, pick out the write operation, specify how to merge records or choosing query type to read.
 - [**Flink Sql Configs**](#FLINK_SQL): These configs control the Hudi Flink SQL source/sink connectors, providing ability to define record keys, pick out the write operation, specify how to merge records, enable/disable asynchronous compaction or choosing query type to read.
 - [**Write Client Configs**](#WRITE_CLIENT): Internally, the Hudi datasource uses a RDD based HoodieWriteClient API to actually perform writes to storage. These configs provide deep control over lower level aspects like file sizing, compression, parallelism, compaction, write schema, cleaning etc. Although Hudi provides sane defaults, from time-time these configs may need to be tweaked to optimize for specific workloads.
 - [**Metastore and Catalog Sync Configs**](#META_SYNC): Configurations used by the Hudi to sync metadata to external metastores and catalogs.
-- [**Metrics Configs**](#METRICS): These set of configs are used to enable monitoring and reporting of key Hudi stats and metrics.
+- [**Metrics Configs**](#METRICS): These set of configs are used to enable monitoring and reporting of keyHudi stats and metrics.

Review Comment:
   ```suggestion
   - [**Metrics Configs**](#METRICS): These set of configs are used to enable monitoring and reporting of key Hudi stats and metrics.
   ```


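For context on the docs text under review: the Spark Datasource configs the page lists (record key, partitioning, precombine/merge field, write operation) are typically supplied as writer options. A minimal illustrative sketch — the `hoodie.*` option keys are Hudi's documented Spark write configs, while the table name, column names, and the commented-out write call are hypothetical placeholders:

```python
# Sketch of common Hudi Spark Datasource write options. The option keys are
# Hudi's documented configs; the values are illustrative placeholders only.
hudi_write_options = {
    "hoodie.table.name": "trips",                             # hypothetical table name
    "hoodie.datasource.write.recordkey.field": "uuid",        # defines the record key
    "hoodie.datasource.write.partitionpath.field": "region",  # defines partitioning
    "hoodie.datasource.write.precombine.field": "ts",         # field used to merge records
    "hoodie.datasource.write.operation": "upsert",            # picks the write operation
}

# Inside a real Spark session these would be applied roughly as (not run here):
# df.write.format("hudi").options(**hudi_write_options).mode("append").save(base_path)
```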

##########
website/docs/basic_configurations.md:
##########

Review Comment:
   Put up a fix on the source: https://github.com/apache/hudi/pull/9295



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
