This is an automated email from the ASF dual-hosted git repository.
lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-paimon.git
The following commit(s) were added to refs/heads/master by this push:
new ecf1130b8 [doc] Document -D execution.runtime-mode=batch for compaction
ecf1130b8 is described below
commit ecf1130b84a5a9929413e9f98d88b98471427800
Author: Jingsong <[email protected]>
AuthorDate: Fri Sep 15 11:35:47 2023 +0800
[doc] Document -D execution.runtime-mode=batch for compaction
---
docs/content/maintenance/dedicated-compaction.md | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/docs/content/maintenance/dedicated-compaction.md b/docs/content/maintenance/dedicated-compaction.md
index e3f174015..64b01c59a 100644
--- a/docs/content/maintenance/dedicated-compaction.md
+++ b/docs/content/maintenance/dedicated-compaction.md
@@ -112,6 +112,10 @@ Example: compact table
--catalog-conf s3.secret-key=*****
```
+You can use `-D execution.runtime-mode=batch` to control batch or streaming mode. If you submit a batch job, all
+current table files will be compacted. If you submit a streaming job, the job will continuously monitor new changes
+to the table and perform compactions as needed.
+
For more usage of the compact action, see
```bash
@@ -159,7 +163,9 @@ You can run the following command to submit a compaction job for multiple databa
| continuous.discovery-interval | 10 s | Duration | The discovery interval of continuous reading. |
| sink.parallelism | (none) | Integer | Defines a custom parallelism for the sink. By default, if this option is not defined, the planner will derive the parallelism for each statement individually by also considering the global configuration. |
-If you submit a batch job (set `execution.runtime-mode: batch` in Flink's configuration), all current table files will be compacted. If you submit a streaming job (set `execution.runtime-mode: streaming` in Flink's configuration), the job will continuously monitor new changes to the table and perform compactions as needed.
+You can use `-D execution.runtime-mode=batch` to control batch or streaming mode. If you submit a batch job, all
+current table files will be compacted. If you submit a streaming job, the job will continuously monitor new changes
+to the table and perform compactions as needed.
{{< hint info >}}
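As a minimal sketch of how the flag documented in this change might be passed to the compact action: the jar path, warehouse location, and database/table names below are placeholders and assumptions, not part of this commit.
```bash
# Sketch only: placeholder paths and names; the jar location is an assumption.
# Batch mode compacts all current table files once, then the job finishes.
<FLINK_HOME>/bin/flink run \
    -D execution.runtime-mode=batch \
    /path/to/paimon-flink-action.jar \
    compact \
    --warehouse s3://path/to/warehouse \
    --database my_db \
    --table my_table \
    --catalog-conf s3.access-key=***** \
    --catalog-conf s3.secret-key=*****

# Without -D execution.runtime-mode=batch (i.e. in streaming mode), the same job
# keeps running and compacts new changes to the table as they arrive.
```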