This is an automated email from the ASF dual-hosted git repository.

kassiez pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 8428fa7b2c [ecosystem](flink) update flink connector faq (#1706)
8428fa7b2c is described below

commit 8428fa7b2ccebfa943f0615fe90cfb32d821663f
Author: wudi <[email protected]>
AuthorDate: Mon Jan 6 10:38:42 2025 +0800

    [ecosystem](flink) update flink connector faq (#1706)
    
    ## Versions
    
    - [x] dev
    - [x] 3.0
    - [x] 2.1
    - [x] 2.0
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 docs/ecosystem/flink-doris-connector.md                       |  3 ++-
 .../current/ecosystem/flink-doris-connector.md                |  3 ++-
 .../version-2.0/ecosystem/flink-doris-connector.md            | 11 +++++------
 .../version-2.1/ecosystem/flink-doris-connector.md            | 11 +++++------
 .../version-3.0/ecosystem/flink-doris-connector.md            | 11 +++++------
 versioned_docs/version-2.0/ecosystem/flink-doris-connector.md |  3 +--
 versioned_docs/version-2.1/ecosystem/flink-doris-connector.md |  3 +--
 versioned_docs/version-3.0/ecosystem/flink-doris-connector.md |  3 +--
 8 files changed, 22 insertions(+), 26 deletions(-)

diff --git a/docs/ecosystem/flink-doris-connector.md 
b/docs/ecosystem/flink-doris-connector.md
index c13efc5108..12d52cb82c 100644
--- a/docs/ecosystem/flink-doris-connector.md
+++ b/docs/ecosystem/flink-doris-connector.md
@@ -998,7 +998,8 @@ from KAFKA_SOURCE;
 
 2. **errCode = 2, detailMessage = transaction [19650] not found**
 
-   This occurs during the Commit stage. The transaction ID recorded in the 
checkpoint has expired on the FE side. When committing again at this time, the 
above error will occur. At this point, it's impossible to start from the 
checkpoint. Subsequently, you can extend the expiration time by modifying the 
`streaming_label_keep_max_second` configuration in `fe.conf`. The default 
expiration time is 12 hours.
+   This occurs during the Commit stage. The transaction ID recorded in the 
checkpoint has expired on the FE side. When committing again at this time, the 
above error will occur. At this point, it's impossible to start from the 
checkpoint. Subsequently, you can extend the expiration time by modifying the 
`streaming_label_keep_max_second` configuration in `fe.conf`. The default 
expiration time is 12 hours. After Doris version 2.0, it will also be limited 
by the `label_num_threshold` config [...]
+
 
 3. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md
index 3264ca7856..1b2aad6cde 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md
@@ -999,7 +999,8 @@ from KAFKA_SOURCE;
 
 2. **errCode = 2, detailMessage = transaction [19650] not found**
 
-   发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。此时无法从 
checkpoint 启动,后续可通过修改 fe.conf 的 streaming_label_keep_max_second 配置来延长过期时间,默认 12 
小时。
+    发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。此时无法从 
checkpoint 启动,后续可通过修改 fe.conf 的 `streaming_label_keep_max_second` 配置来延长过期时间,默认 
12 小时。Doris2.0 版本后还会受到 fe.conf 中 `label_num_threshold` 配置的限制 (默认 2000) 
,可以调大或者改为 -1(-1 表示只受时间限制)。
+
 
 3. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/flink-doris-connector.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/flink-doris-connector.md
index e1c525bfdf..2d7c84a4e5 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/flink-doris-connector.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/flink-doris-connector.md
@@ -269,7 +269,7 @@ source.sinkTo(builder.build());
 **CDC 数据流 (JsonDebeziumSchemaSerializer)**
 
 :::info 备注
-上游数据必须符合Debezium数据格式。
+上游数据必须符合 Debezium 数据格式。
 :::
 
 ```java
@@ -376,7 +376,7 @@ ON a.city = c.city
 | Key                         | Default Value | Required | Comment             
                                                                                
                                                                                
                                                                                
                                                            |
 | --------------------------- | ------------- | -------- 
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | sink.label-prefix           | --            | Y        | Stream load 导入使用的 
label 前缀。2pc 场景下要求全局唯一,用来保证 Flink 的 EOS 语义。                                     
                                                                                
                                                                                
                                                              |
-| sink.properties.*           | --            | N        | Stream Load 
的导入参数。<br />例如: 'sink.properties.column_separator' = ', ' 定义列分隔符,  
'sink.properties.escape_delimiters' = 'true' 特殊字符作为分隔符,`\x01`会被转换为二进制的 0x01 。  
<br /><br />JSON 格式导入<br />'sink.properties.format' = 'json' 
'sink.properties.read_json_by_line' = 'true'<br 
/>详细参数参考[这里](../data-operate/import/stream-load-manual.md)。<br /><br />Group 
Commit 模式 <br /> 例如:'sink.properties.group_commit' = 'sync_mode' 设置 group 
commit 为同步模式。fl [...]
+| sink.properties.*           | --            | N        | Stream Load 
的导入参数。<br />例如: 'sink.properties.column_separator' = ', ' 定义列分隔符,  
'sink.properties.escape_delimiters' = 'true' 特殊字符作为分隔符,`\x01`会被转换为二进制的 0x01。  
<br /><br />JSON 格式导入<br />'sink.properties.format' = 'json' 
'sink.properties.read_json_by_line' = 'true'<br 
/>详细参数参考[这里](../data-operate/import/stream-load-manual.md)。<br /><br />Group 
Commit 模式 <br /> 例如:'sink.properties.group_commit' = 'sync_mode' 设置 group 
commit 为同步模式。fli [...]
 | sink.enable-delete          | TRUE          | N        | 是否启用删除。此选项需要 Doris 
表开启批量删除功能 (Doris0.15+ 版本默认开启),只支持 Unique 模型。                                    
                                                                                
                                                                                
                                                             |
 | sink.enable-2pc             | TRUE          | N        | 是否开启两阶段提交 (2pc),默认为 
true,保证 Exactly-Once 
语义。关于两阶段提交可参考[这里](../data-operate/import/stream-load-manual.md)。                
                                                                                
                                                                                
                                       |
 | sink.buffer-size            | 1MB           | N        | 写数据缓存 buffer 
大小,单位字节。不建议修改,默认配置即可                                                            
                                                                                
                                                                                
                                                                   |
@@ -582,7 +582,7 @@ insert into doris_sink select id,name,bank,age from 
cdc_mysql_source;
 | --create-table-only     | 是否只仅仅同步表的结构                                        
                                                                                
                                                                                
                                                                                
                                                                                
                                                                 |
 
 :::info 备注
-1. 同步时需要在 `$FLINK_HOME/lib` 目录下添加对应的 Flink CDC 依赖,比如 
flink-sql-connector-mysql-cdc-${version}.jar,flink-sql-connector-oracle-cdc-${version}.jar
 ,flink-sql-connector-mongodb-cdc-${version}.jar
+1. 同步时需要在 `$FLINK_HOME/lib` 目录下添加对应的 Flink CDC 依赖,比如 
flink-sql-connector-mysql-cdc-${version}.jar,flink-sql-connector-oracle-cdc-${version}.jar,flink-sql-connector-mongodb-cdc-${version}.jar
 2. Connector 24.0.0 之后依赖的 Flink CDC 版本需要在 3.1 以上,如果需使用 Flink CDC 同步 MySQL 和 
Oracle,还需要在 `$FLINK_HOME/lib` 下增加相关的 JDBC 驱动。
 :::
 
@@ -840,8 +840,7 @@ Exactly-Once 场景下,Flink Job 重启时必须从最新的 Checkpoint/Savepo
 
 5. **errCode = 2, detailMessage = transaction [19650] not found**
 
-发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。
-此时无法从 checkpoint 启动,后续可通过修改 fe.conf 的 streaming_label_keep_max_second 
配置来延长过期时间,默认 12 小时。
+发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。此时无法从 
checkpoint 启动,后续可通过修改 fe.conf 的 `streaming_label_keep_max_second` 配置来延长过期时间,默认 
12 小时。Doris2.0 版本后还会受到 fe.conf 中 `label_num_threshold` 配置的限制 (默认 2000) 
,可以调大或者改为 -1(-1 表示只受时间限制)。
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
@@ -859,7 +858,7 @@ Connector1.1.0 版本以前,是攒批写入的,写入均是由数据驱动
 
 9. **tablet writer write failed, tablet_id=190958, txn_id=3505530, err=-235**
 
-通常发生在 Connector1.1.0 之前,是由于写入频率过快,导致版本过多。可以通过设置 sink.batch.size 和 
sink.batch.interval 参数来降低 Streamload 
的频率。在Connector1.1.0之后,默认写入时机是由Checkpoint控制,可以通过增加Checkpoint间隔来降低写入频率。
+通常发生在 Connector1.1.0 之前,是由于写入频率过快,导致版本过多。可以通过设置 sink.batch.size 和 
sink.batch.interval 参数来降低 Streamload 的频率。在 Connector1.1.0 之后,默认写入时机是由 
Checkpoint 控制,可以通过增加 Checkpoint 间隔来降低写入频率。
 
 10. **Flink 导入有脏数据,如何跳过?**
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md
index e1c525bfdf..2d7c84a4e5 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md
@@ -269,7 +269,7 @@ source.sinkTo(builder.build());
 **CDC 数据流 (JsonDebeziumSchemaSerializer)**
 
 :::info 备注
-上游数据必须符合Debezium数据格式。
+上游数据必须符合 Debezium 数据格式。
 :::
 
 ```java
@@ -376,7 +376,7 @@ ON a.city = c.city
 | Key                         | Default Value | Required | Comment             
                                                                                
                                                                                
                                                                                
                                                            |
 | --------------------------- | ------------- | -------- 
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | sink.label-prefix           | --            | Y        | Stream load 导入使用的 
label 前缀。2pc 场景下要求全局唯一,用来保证 Flink 的 EOS 语义。                                     
                                                                                
                                                                                
                                                              |
-| sink.properties.*           | --            | N        | Stream Load 
的导入参数。<br />例如: 'sink.properties.column_separator' = ', ' 定义列分隔符,  
'sink.properties.escape_delimiters' = 'true' 特殊字符作为分隔符,`\x01`会被转换为二进制的 0x01 。  
<br /><br />JSON 格式导入<br />'sink.properties.format' = 'json' 
'sink.properties.read_json_by_line' = 'true'<br 
/>详细参数参考[这里](../data-operate/import/stream-load-manual.md)。<br /><br />Group 
Commit 模式 <br /> 例如:'sink.properties.group_commit' = 'sync_mode' 设置 group 
commit 为同步模式。fl [...]
+| sink.properties.*           | --            | N        | Stream Load 
的导入参数。<br />例如: 'sink.properties.column_separator' = ', ' 定义列分隔符,  
'sink.properties.escape_delimiters' = 'true' 特殊字符作为分隔符,`\x01`会被转换为二进制的 0x01。  
<br /><br />JSON 格式导入<br />'sink.properties.format' = 'json' 
'sink.properties.read_json_by_line' = 'true'<br 
/>详细参数参考[这里](../data-operate/import/stream-load-manual.md)。<br /><br />Group 
Commit 模式 <br /> 例如:'sink.properties.group_commit' = 'sync_mode' 设置 group 
commit 为同步模式。fli [...]
 | sink.enable-delete          | TRUE          | N        | 是否启用删除。此选项需要 Doris 
表开启批量删除功能 (Doris0.15+ 版本默认开启),只支持 Unique 模型。                                    
                                                                                
                                                                                
                                                             |
 | sink.enable-2pc             | TRUE          | N        | 是否开启两阶段提交 (2pc),默认为 
true,保证 Exactly-Once 
语义。关于两阶段提交可参考[这里](../data-operate/import/stream-load-manual.md)。                
                                                                                
                                                                                
                                       |
 | sink.buffer-size            | 1MB           | N        | 写数据缓存 buffer 
大小,单位字节。不建议修改,默认配置即可                                                            
                                                                                
                                                                                
                                                                   |
@@ -582,7 +582,7 @@ insert into doris_sink select id,name,bank,age from 
cdc_mysql_source;
 | --create-table-only     | 是否只仅仅同步表的结构                                        
                                                                                
                                                                                
                                                                                
                                                                                
                                                                 |
 
 :::info 备注
-1. 同步时需要在 `$FLINK_HOME/lib` 目录下添加对应的 Flink CDC 依赖,比如 
flink-sql-connector-mysql-cdc-${version}.jar,flink-sql-connector-oracle-cdc-${version}.jar
 ,flink-sql-connector-mongodb-cdc-${version}.jar
+1. 同步时需要在 `$FLINK_HOME/lib` 目录下添加对应的 Flink CDC 依赖,比如 
flink-sql-connector-mysql-cdc-${version}.jar,flink-sql-connector-oracle-cdc-${version}.jar,flink-sql-connector-mongodb-cdc-${version}.jar
 2. Connector 24.0.0 之后依赖的 Flink CDC 版本需要在 3.1 以上,如果需使用 Flink CDC 同步 MySQL 和 
Oracle,还需要在 `$FLINK_HOME/lib` 下增加相关的 JDBC 驱动。
 :::
 
@@ -840,8 +840,7 @@ Exactly-Once 场景下,Flink Job 重启时必须从最新的 Checkpoint/Savepo
 
 5. **errCode = 2, detailMessage = transaction [19650] not found**
 
-发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。
-此时无法从 checkpoint 启动,后续可通过修改 fe.conf 的 streaming_label_keep_max_second 
配置来延长过期时间,默认 12 小时。
+发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。此时无法从 
checkpoint 启动,后续可通过修改 fe.conf 的 `streaming_label_keep_max_second` 配置来延长过期时间,默认 
12 小时。Doris2.0 版本后还会受到 fe.conf 中 `label_num_threshold` 配置的限制 (默认 2000) 
,可以调大或者改为 -1(-1 表示只受时间限制)。
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
@@ -859,7 +858,7 @@ Connector1.1.0 版本以前,是攒批写入的,写入均是由数据驱动
 
 9. **tablet writer write failed, tablet_id=190958, txn_id=3505530, err=-235**
 
-通常发生在 Connector1.1.0 之前,是由于写入频率过快,导致版本过多。可以通过设置 sink.batch.size 和 
sink.batch.interval 参数来降低 Streamload 
的频率。在Connector1.1.0之后,默认写入时机是由Checkpoint控制,可以通过增加Checkpoint间隔来降低写入频率。
+通常发生在 Connector1.1.0 之前,是由于写入频率过快,导致版本过多。可以通过设置 sink.batch.size 和 
sink.batch.interval 参数来降低 Streamload 的频率。在 Connector1.1.0 之后,默认写入时机是由 
Checkpoint 控制,可以通过增加 Checkpoint 间隔来降低写入频率。
 
 10. **Flink 导入有脏数据,如何跳过?**
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md
index e1c525bfdf..2d7c84a4e5 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md
@@ -269,7 +269,7 @@ source.sinkTo(builder.build());
 **CDC 数据流 (JsonDebeziumSchemaSerializer)**
 
 :::info 备注
-上游数据必须符合Debezium数据格式。
+上游数据必须符合 Debezium 数据格式。
 :::
 
 ```java
@@ -376,7 +376,7 @@ ON a.city = c.city
 | Key                         | Default Value | Required | Comment             
                                                                                
                                                                                
                                                                                
                                                            |
 | --------------------------- | ------------- | -------- 
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | sink.label-prefix           | --            | Y        | Stream load 导入使用的 
label 前缀。2pc 场景下要求全局唯一,用来保证 Flink 的 EOS 语义。                                     
                                                                                
                                                                                
                                                              |
-| sink.properties.*           | --            | N        | Stream Load 
的导入参数。<br />例如: 'sink.properties.column_separator' = ', ' 定义列分隔符,  
'sink.properties.escape_delimiters' = 'true' 特殊字符作为分隔符,`\x01`会被转换为二进制的 0x01 。  
<br /><br />JSON 格式导入<br />'sink.properties.format' = 'json' 
'sink.properties.read_json_by_line' = 'true'<br 
/>详细参数参考[这里](../data-operate/import/stream-load-manual.md)。<br /><br />Group 
Commit 模式 <br /> 例如:'sink.properties.group_commit' = 'sync_mode' 设置 group 
commit 为同步模式。fl [...]
+| sink.properties.*           | --            | N        | Stream Load 
的导入参数。<br />例如: 'sink.properties.column_separator' = ', ' 定义列分隔符,  
'sink.properties.escape_delimiters' = 'true' 特殊字符作为分隔符,`\x01`会被转换为二进制的 0x01。  
<br /><br />JSON 格式导入<br />'sink.properties.format' = 'json' 
'sink.properties.read_json_by_line' = 'true'<br 
/>详细参数参考[这里](../data-operate/import/stream-load-manual.md)。<br /><br />Group 
Commit 模式 <br /> 例如:'sink.properties.group_commit' = 'sync_mode' 设置 group 
commit 为同步模式。fli [...]
 | sink.enable-delete          | TRUE          | N        | 是否启用删除。此选项需要 Doris 
表开启批量删除功能 (Doris0.15+ 版本默认开启),只支持 Unique 模型。                                    
                                                                                
                                                                                
                                                             |
 | sink.enable-2pc             | TRUE          | N        | 是否开启两阶段提交 (2pc),默认为 
true,保证 Exactly-Once 
语义。关于两阶段提交可参考[这里](../data-operate/import/stream-load-manual.md)。                
                                                                                
                                                                                
                                       |
 | sink.buffer-size            | 1MB           | N        | 写数据缓存 buffer 
大小,单位字节。不建议修改,默认配置即可                                                            
                                                                                
                                                                                
                                                                   |
@@ -582,7 +582,7 @@ insert into doris_sink select id,name,bank,age from 
cdc_mysql_source;
 | --create-table-only     | 是否只仅仅同步表的结构                                        
                                                                                
                                                                                
                                                                                
                                                                                
                                                                 |
 
 :::info 备注
-1. 同步时需要在 `$FLINK_HOME/lib` 目录下添加对应的 Flink CDC 依赖,比如 
flink-sql-connector-mysql-cdc-${version}.jar,flink-sql-connector-oracle-cdc-${version}.jar
 ,flink-sql-connector-mongodb-cdc-${version}.jar
+1. 同步时需要在 `$FLINK_HOME/lib` 目录下添加对应的 Flink CDC 依赖,比如 
flink-sql-connector-mysql-cdc-${version}.jar,flink-sql-connector-oracle-cdc-${version}.jar,flink-sql-connector-mongodb-cdc-${version}.jar
 2. Connector 24.0.0 之后依赖的 Flink CDC 版本需要在 3.1 以上,如果需使用 Flink CDC 同步 MySQL 和 
Oracle,还需要在 `$FLINK_HOME/lib` 下增加相关的 JDBC 驱动。
 :::
 
@@ -840,8 +840,7 @@ Exactly-Once 场景下,Flink Job 重启时必须从最新的 Checkpoint/Savepo
 
 5. **errCode = 2, detailMessage = transaction [19650] not found**
 
-发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。
-此时无法从 checkpoint 启动,后续可通过修改 fe.conf 的 streaming_label_keep_max_second 
配置来延长过期时间,默认 12 小时。
+发生在 Commit 阶段,checkpoint 里面记录的事务 ID,在 FE 侧已经过期,此时再次 commit 就会出现上述错误。此时无法从 
checkpoint 启动,后续可通过修改 fe.conf 的 `streaming_label_keep_max_second` 配置来延长过期时间,默认 
12 小时。Doris2.0 版本后还会受到 fe.conf 中 `label_num_threshold` 配置的限制 (默认 2000) 
,可以调大或者改为 -1(-1 表示只受时间限制)。
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
@@ -859,7 +858,7 @@ Connector1.1.0 版本以前,是攒批写入的,写入均是由数据驱动
 
 9. **tablet writer write failed, tablet_id=190958, txn_id=3505530, err=-235**
 
-通常发生在 Connector1.1.0 之前,是由于写入频率过快,导致版本过多。可以通过设置 sink.batch.size 和 
sink.batch.interval 参数来降低 Streamload 
的频率。在Connector1.1.0之后,默认写入时机是由Checkpoint控制,可以通过增加Checkpoint间隔来降低写入频率。
+通常发生在 Connector1.1.0 之前,是由于写入频率过快,导致版本过多。可以通过设置 sink.batch.size 和 
sink.batch.interval 参数来降低 Streamload 的频率。在 Connector1.1.0 之后,默认写入时机是由 
Checkpoint 控制,可以通过增加 Checkpoint 间隔来降低写入频率。
 
 10. **Flink 导入有脏数据,如何跳过?**
 
diff --git a/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md 
b/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md
index 24c8d07a47..30719ccd51 100644
--- a/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md
+++ b/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md
@@ -836,8 +836,7 @@ When Exactly-Once is not required, it can also be solved by 
turning off 2PC comm
 
 5. **errCode = 2, detailMessage = transaction [19650] not found**
 
-Occurred in the Commit phase, the transaction ID recorded in the checkpoint 
has expired on the FE side, and the above error will occur when committing 
again at this time.
-At this time, it cannot be started from the checkpoint, and the expiration 
time can be extended by modifying the streaming_label_keep_max_second 
configuration in fe.conf, which defaults to 12 hours.
+This occurs during the Commit stage. The transaction ID recorded in the 
checkpoint has expired on the FE side. When committing again at this time, the 
above error will occur. At this point, it's impossible to start from the 
checkpoint. Subsequently, you can extend the expiration time by modifying the 
`streaming_label_keep_max_second` configuration in `fe.conf`. The default 
expiration time is 12 hours. After Doris version 2.0, it will also be limited 
by the `label_num_threshold` configura [...]
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
diff --git a/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md 
b/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
index 24c8d07a47..30719ccd51 100644
--- a/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
+++ b/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
@@ -836,8 +836,7 @@ When Exactly-Once is not required, it can also be solved by 
turning off 2PC comm
 
 5. **errCode = 2, detailMessage = transaction [19650] not found**
 
-Occurred in the Commit phase, the transaction ID recorded in the checkpoint 
has expired on the FE side, and the above error will occur when committing 
again at this time.
-At this time, it cannot be started from the checkpoint, and the expiration 
time can be extended by modifying the streaming_label_keep_max_second 
configuration in fe.conf, which defaults to 12 hours.
+This occurs during the Commit stage. The transaction ID recorded in the 
checkpoint has expired on the FE side. When committing again at this time, the 
above error will occur. At this point, it's impossible to start from the 
checkpoint. Subsequently, you can extend the expiration time by modifying the 
`streaming_label_keep_max_second` configuration in `fe.conf`. The default 
expiration time is 12 hours. After Doris version 2.0, it will also be limited 
by the `label_num_threshold` configura [...]
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
diff --git a/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md 
b/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md
index 24c8d07a47..30719ccd51 100644
--- a/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md
+++ b/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md
@@ -836,8 +836,7 @@ When Exactly-Once is not required, it can also be solved by 
turning off 2PC comm
 
 5. **errCode = 2, detailMessage = transaction [19650] not found**
 
-Occurred in the Commit phase, the transaction ID recorded in the checkpoint 
has expired on the FE side, and the above error will occur when committing 
again at this time.
-At this time, it cannot be started from the checkpoint, and the expiration 
time can be extended by modifying the streaming_label_keep_max_second 
configuration in fe.conf, which defaults to 12 hours.
+This occurs during the Commit stage. The transaction ID recorded in the 
checkpoint has expired on the FE side. When committing again at this time, the 
above error will occur. At this point, it's impossible to start from the 
checkpoint. Subsequently, you can extend the expiration time by modifying the 
`streaming_label_keep_max_second` configuration in `fe.conf`. The default 
expiration time is 12 hours. After Doris version 2.0, it will also be limited 
by the `label_num_threshold` configura [...]
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
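
The FAQ text added by this patch points readers at two `fe.conf` settings. For anyone skimming the archive, a minimal illustrative fragment is sketched below; the values are examples chosen for the sketch, not recommendations, and whether each setting takes effect dynamically or requires an FE restart should be checked against the Doris FE configuration reference.

```ini
# fe.conf -- illustrative values only
# Keep transaction labels for 24 hours instead of the default 12 hours (43200 s),
# so a job restarted from an older checkpoint can still find its transaction:
streaming_label_keep_max_second = 86400
# Since Doris 2.0, retained labels are also capped by count (default 2000);
# -1 removes the count cap so only the time limit above applies:
label_num_threshold = -1
```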
 

