This is an automated email from the ASF dual-hosted git repository.
kunni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-cdc.git
The following commit(s) were added to refs/heads/master by this push:
     new adfa20205 [docs] Remove Duplicate Description for `scan.incremental.snapshot.unbounded-chunk-first.enabled` Parameter in Documentation (#4016)
adfa20205 is described below
commit adfa202051e8b63023aaf299f5681b03a72d190e
Author: Junbo wang <[email protected]>
AuthorDate: Wed May 14 15:42:26 2025 +0800
    [docs] Remove Duplicate Description for `scan.incremental.snapshot.unbounded-chunk-first.enabled` Parameter in Documentation (#4016)
---
docs/content/docs/connectors/pipeline-connectors/mysql.md | 11 -----------
1 file changed, 11 deletions(-)
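The deduplicated row documents the `scan.incremental.snapshot.unbounded-chunk-first.enabled` option of the MySQL pipeline connector. For context, a minimal pipeline definition sketch showing where such an option would be set (hostnames, credentials, and table patterns below are placeholder assumptions, not from this commit):

```yaml
# Hypothetical Flink CDC pipeline definition; connection values are placeholders.
source:
  type: mysql
  hostname: localhost
  port: 3306
  username: flink_user
  password: flink_pw
  tables: app_db.\.*
  # Experimental: assign unbounded chunks first during the snapshot phase
  # to reduce the risk of a TaskManager OOM on the largest chunk.
  scan.incremental.snapshot.unbounded-chunk-first.enabled: true

sink:
  type: values
```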
diff --git a/docs/content/docs/connectors/pipeline-connectors/mysql.md b/docs/content/docs/connectors/pipeline-connectors/mysql.md
index 622a67b93..3cd68675c 100644
--- a/docs/content/docs/connectors/pipeline-connectors/mysql.md
+++ b/docs/content/docs/connectors/pipeline-connectors/mysql.md
@@ -343,17 +343,6 @@ pipeline:
Experimental option, defaults to false.
</td>
</tr>
- <tr>
- <td>scan.incremental.snapshot.unbounded-chunk-first.enabled</td>
- <td>optional</td>
- <td style="word-wrap: break-word;">false</td>
- <td>Boolean</td>
- <td>
- Whether to assign the unbounded chunks first during snapshot reading phase.<br>
- This might help reduce the risk of the TaskManager experiencing an out-of-memory (OOM) error when taking a snapshot of the largest unbounded chunk.<br>
- Experimental option, defaults to false.
- </td>
- </tr>
<tr>
<td>scan.incremental.snapshot.backfill.skip</td>
<td>optional</td>