This is an automated email from the ASF dual-hosted git repository.

kunni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-cdc.git


The following commit(s) were added to refs/heads/master by this push:
     new 79683cf22 [FLINK-38836][docs] update section headings and improve pipeline configuration options (#4244)
79683cf22 is described below

commit 79683cf226acd58a40c952d260c22d78abbc66c4
Author: Jia Fan <[email protected]>
AuthorDate: Fri Jan 30 14:06:26 2026 +0800

    [FLINK-38836][docs] update section headings and improve pipeline configuration options (#4244)
---
 .../docs/connectors/pipeline-connectors/iceberg.md |  2 +-
 .../docs/connectors/pipeline-connectors/paimon.md  |  2 +-
 docs/content.zh/docs/core-concept/data-pipeline.md | 24 ++++++++++++++--------
 .../developer-guide/contribute-to-flink-cdc.md     |  4 ++--
 4 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/docs/content.zh/docs/connectors/pipeline-connectors/iceberg.md b/docs/content.zh/docs/connectors/pipeline-connectors/iceberg.md
index 173d17bac..9d97cd831 100644
--- a/docs/content.zh/docs/connectors/pipeline-connectors/iceberg.md
+++ b/docs/content.zh/docs/connectors/pipeline-connectors/iceberg.md
@@ -135,7 +135,7 @@ Pipeline Connector Options
       <td>optional</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
-      <td>Partition keys for each partitioned table, allow setting multiple primary keys for multiTables. Each table are separated by ';', and each partition key are separated by ','. For example, we can set partition.key of two tables by 'testdb.table1:id1,id2;testdb.table2:name'.</td>
+      <td>Partition keys for each partitioned table. Allows setting multiple primary keys across multiple tables. Tables are separated by ';', and partition keys are separated by ','. For example, we can set <code>partition.key</code> of two tables using 'testdb.table1:id1,id2;testdb.table2:name'. For partition transforms, we can set <code>partition.key</code> using 'testdb.table1:truncate[10](id);testdb.table2:hour(create_time);testdb.table3:day(create_time);testdb.table4:month(create_time);tes [...]
     </tr>
     <tr>
       <td>catalog.properties.*</td>
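The `partition.key` value format described in the changed option above can be illustrated with a short sketch. The surrounding `sink` block layout is an assumption for illustration only; the option name and its value syntax come from the option description.

```yaml
# Sketch of an Iceberg sink using partition.key (sink block layout assumed).
# Format: tables separated by ';', keys separated by ','; partition
# transforms such as truncate[n]()/hour()/day()/month() follow the
# examples in the option description above.
sink:
  type: iceberg
  partition.key: testdb.table1:id1,id2;testdb.table2:day(create_time)
```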
diff --git a/docs/content.zh/docs/connectors/pipeline-connectors/paimon.md b/docs/content.zh/docs/connectors/pipeline-connectors/paimon.md
index 6657cbf54..b269e626f 100644
--- a/docs/content.zh/docs/connectors/pipeline-connectors/paimon.md
+++ b/docs/content.zh/docs/connectors/pipeline-connectors/paimon.md
@@ -207,7 +207,7 @@ Pipeline Connector Options
     </tr>
     <tr>
       <td>TIMESTAMP</td>
-      <td>DATETIME</td>
+      <td>TIMESTAMP</td>
       <td></td>
     </tr>
     <tr>
diff --git a/docs/content.zh/docs/core-concept/data-pipeline.md b/docs/content.zh/docs/core-concept/data-pipeline.md
index 16007fe6c..a2edc0854 100644
--- a/docs/content.zh/docs/core-concept/data-pipeline.md
+++ b/docs/content.zh/docs/core-concept/data-pipeline.md
@@ -109,12 +109,18 @@ under the License.
 ```
 
 # Pipeline Configuration
-Below are some optional configurations for a Data Pipeline:
-
-| Parameter              | Meaning                                                                              | optional/required |
-|------------------------|--------------------------------------------------------------------------------------|-------------------|
-| name                   | The name of the pipeline, used as the job name in the Flink cluster.                 | optional          |
-| parallelism            | The global parallelism of the pipeline; defaults to 1.                               | optional          |
-| local-time-zone        | The job-level local time zone.                                                       | optional          |
-| execution.runtime-mode | The runtime mode of the pipeline, STREAMING or BATCH; defaults to STREAMING.         | optional          |
-| operator.uid.prefix    | The prefix for operator UIDs in the pipeline. If unset, Flink generates a unique UID for each operator. Setting this option is recommended to provide stable, recognizable operator IDs, which helps with stateful upgrades, troubleshooting, and diagnostics in the Flink UI. | optional          |
+Below are the configuration options supported at the Data Pipeline level.
+Note that although all of these parameters are optional, at least one of them must be specified. In other words, the `pipeline` section is required and must not be empty.
+
+| Parameter                     | Meaning [...]
+|-------------------------------|--------- [...]
+| `name`                        | The name of this pipeline, used as the job name in the Flink cluster. [...]
+| `parallelism`                 | The global parallelism of the pipeline; defaults to 1. [...]
+| `local-time-zone`             | The job-level local time zone. [...]
+| `execution.runtime-mode`      | The runtime mode of the pipeline, STREAMING or BATCH; defaults to STREAMING. [...]
+| `schema.change.behavior`      | How to handle [schema changes]({{< ref "docs/core-concept/schema-evolution" >}}). Options: [`exception`]({{< ref "docs/core-concept/schema-evolution" >}}#exception-mode), [`evolve`]({{< ref "docs/core-concept/schema-evolution" >}}#evolve-mode), [`try_evolve`]({{< ref "docs/core-concept/schema-evolution" >}}#tryevolve-mode), [`lenient`]({{< ref "docs/core-concept/schema-evolution" >}}#lenient-mode) (default), or [`ignore`]({{< ref "docs/core-concept/schema-evolution" >}}#ignore-mode). [...]
+| `schema.operator.uid`         | The unique ID of the schema operator. This ID is used for inter-operator communication and must be unique across all operators. **Deprecated**: use `operator.uid.prefix` instead. [...]
+| `schema-operator.rpc-timeout` | The timeout for the SchemaOperator to wait for downstream application of a SchemaChangeEvent to finish; defaults to 3 minutes. [...]
+| `operator.uid.prefix`         | The prefix for operator UIDs in the pipeline. If unset, Flink generates a unique UID for each operator. Setting this option is recommended to provide stable, recognizable operator IDs, which helps with stateful upgrades, troubleshooting, and diagnostics in the Flink UI. [...]
+
+Note: although all of the parameters above are optional, at least one of them must be specified. The `pipeline` section is required and must not be empty.
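The options in the table above map onto the `pipeline` block of a pipeline definition file. A minimal sketch, with illustrative values and option names taken from the table:

```yaml
# Minimal sketch of the pipeline block; all values are illustrative.
pipeline:
  name: mysql-to-iceberg-sync         # job name shown in the Flink cluster
  parallelism: 4                      # global parallelism (default 1)
  execution.runtime-mode: STREAMING   # or BATCH
  schema.change.behavior: lenient     # default behavior
  operator.uid.prefix: cdc-job-1      # preferred over deprecated schema.operator.uid
```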
diff --git a/docs/content/docs/developer-guide/contribute-to-flink-cdc.md b/docs/content/docs/developer-guide/contribute-to-flink-cdc.md
index ecc76936c..6436276c7 100644
--- a/docs/content/docs/developer-guide/contribute-to-flink-cdc.md
+++ b/docs/content/docs/developer-guide/contribute-to-flink-cdc.md
@@ -45,7 +45,7 @@ project as follows.
 
 Any other question? Reach out to the Dev mail list to get help!
 
-<h2 id="code-review-guide">Code Contribution Guide</h2>
+<h2 id="code-contribution-guide">Code Contribution Guide</h2>
 
 Flink CDC is maintained, improved, and extended by code contributions of volunteers. We welcome contributions.
 
@@ -63,7 +63,7 @@ If you would like to contribute to Flink CDC, you could raise it as follows.
 4. Find a reviewer to review your PR and make sure the CI passed
 5. A committer of Flink CDC checks if the contribution fulfills the requirements and merges the code to the codebase.
 
-<h2 id="code-contribution-guide">Code Review Guide</h2>
+<h2 id="code-review-guide">Code Review Guide</h2>
 
 Every review needs to check the following aspects. 
 
