This is an automated email from the ASF dual-hosted git repository.
lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/paimon.git
The following commit(s) were added to refs/heads/master by this push:
new da3e79550 [doc] Update doc style to fix minor typos (#4465)
da3e79550 is described below
commit da3e795501fc4d804b5a7daa2168b2f3eef10877
Author: Jarrod <[email protected]>
AuthorDate: Thu Nov 7 12:13:02 2024 +0800
[doc] Update doc style to fix minor typos (#4465)
---
docs/content/engines/doris.md | 4 ++--
docs/content/engines/trino.md | 8 ++++----
docs/content/flink/action-jars.md | 8 ++++----
docs/content/flink/clone-tables.md | 4 ++--
docs/content/flink/expire-partition.md | 2 +-
docs/content/flink/savepoint.md | 10 +++++-----
6 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/docs/content/engines/doris.md b/docs/content/engines/doris.md
index 634e7f7c7..cd778cd57 100644
--- a/docs/content/engines/doris.md
+++ b/docs/content/engines/doris.md
@@ -73,13 +73,13 @@ See [Apache Doris Website](https://doris.apache.org/docs/lakehouse/datalake-anal
1. Query Paimon table with full qualified name
- ```
+ ```sql
SELECT * FROM paimon_hdfs.paimon_db.paimon_table;
```
2. Switch to Paimon Catalog and query
- ```
+ ```sql
SWITCH paimon_hdfs;
USE paimon_db;
SELECT * FROM paimon_table;
diff --git a/docs/content/engines/trino.md b/docs/content/engines/trino.md
index 0f0fe8b94..05fc47729 100644
--- a/docs/content/engines/trino.md
+++ b/docs/content/engines/trino.md
@@ -34,9 +34,9 @@ Paimon currently supports Trino 420 and above.
## Filesystem
-From version 0.8, paimon share trino filesystem for all actions, which means, you should
-config trino filesystem before using trino-paimon. You can find information about how to config
-filesystems for trino on trino official website.
+From version 0.8, Paimon share Trino filesystem for all actions, which means, you should
+config Trino filesystem before using trino-paimon. You can find information about how to config
+filesystems for Trino on Trino official website.
## Preparing Paimon Jar File
@@ -113,7 +113,7 @@ If you are using HDFS, choose one of the following ways to configure your HDFS:
- set environment variable HADOOP_CONF_DIR.
- configure `hadoop-conf-dir` in the properties.
-If you are using a hadoop filesystem, you can still use trino-hdfs and trino-hive to config it.
+If you are using a Hadoop filesystem, you can still use trino-hdfs and trino-hive to config it.
For example, if you use oss as a storage, you can write in `paimon.properties` according to [Trino Reference](https://trino.io/docs/current/connector/hive.html#hdfs-configuration):
```
diff --git a/docs/content/flink/action-jars.md b/docs/content/flink/action-jars.md
index de86d1686..34e911ff6 100644
--- a/docs/content/flink/action-jars.md
+++ b/docs/content/flink/action-jars.md
@@ -260,7 +260,7 @@ For more information of 'delete', see
## Drop Partition
-Run the following command to submit a drop_partition job for the table.
+Run the following command to submit a 'drop_partition' job for the table.
```bash
<FLINK_HOME>/bin/flink run \
@@ -276,7 +276,7 @@ partition_spec:
key1=value1,key2=value2...
```
-For more information of drop_partition, see
+For more information of 'drop_partition', see
```bash
<FLINK_HOME>/bin/flink run \
@@ -286,7 +286,7 @@ For more information of drop_partition, see
## Rewrite File Index
-Run the following command to submit a rewrite_file_index job for the table.
+Run the following command to submit a 'rewrite_file_index' job for the table.
```bash
<FLINK_HOME>/bin/flink run \
@@ -297,7 +297,7 @@ Run the following command to submit a rewrite_file_index job for the table.
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]
```
-For more information of rewrite_file_index, see
+For more information of 'rewrite_file_index', see
```bash
<FLINK_HOME>/bin/flink run \
diff --git a/docs/content/flink/clone-tables.md b/docs/content/flink/clone-tables.md
index eec5ebb6d..aed24c3bc 100644
--- a/docs/content/flink/clone-tables.md
+++ b/docs/content/flink/clone-tables.md
@@ -39,10 +39,10 @@ However, if you want to clone the table while writing it at the same time, submi
```sql
CALL sys.clone(
- warehouse => 'source_warehouse_path`,
+ warehouse => 'source_warehouse_path',
[`database` => 'source_database_name',]
[`table` => 'source_table_name',]
- target_warehouse => 'target_warehouse_path`,
+ target_warehouse => 'target_warehouse_path',
[target_database => 'target_database_name',]
[target_table => 'target_table_name',]
[parallelism => <parallelism>]
diff --git a/docs/content/flink/expire-partition.md b/docs/content/flink/expire-partition.md
index 3acf6e59d..226017513 100644
--- a/docs/content/flink/expire-partition.md
+++ b/docs/content/flink/expire-partition.md
@@ -134,7 +134,7 @@ More options:
<td><h5>end-input.check-partition-expire</h5></td>
<td style="word-wrap: break-word;">false</td>
<td>Boolean</td>
-        <td>Whether check partition expire after batch mode or bounded stream job finish.</li></ul></td>
+        <td>Whether check partition expire after batch mode or bounded stream job finish.</td>
</tr>
</tbody>
</table>
diff --git a/docs/content/flink/savepoint.md b/docs/content/flink/savepoint.md
index 16139f0b0..a0934df13 100644
--- a/docs/content/flink/savepoint.md
+++ b/docs/content/flink/savepoint.md
@@ -41,12 +41,12 @@ metadata left. This is very safe, so we recommend using this feature to stop and
## Tag with Savepoint
-In Flink, we may consume from kafka and then write to paimon. Since flink's checkpoint only retains a limited number,
+In Flink, we may consume from Kafka and then write to Paimon. Since Flink's checkpoint only retains a limited number,
we will trigger a savepoint at certain time (such as code upgrades, data updates, etc.) to ensure that the state can
be retained for a longer time, so that the job can be restored incrementally.
-Paimon's snapshot is similar to flink's checkpoint, and both will automatically expire, but the tag feature of paimon
-allows snapshots to be retained for a long time. Therefore, we can combine the two features of paimon's tag and flink's
+Paimon's snapshot is similar to Flink's checkpoint, and both will automatically expire, but the tag feature of Paimon
+allows snapshots to be retained for a long time. Therefore, we can combine the two features of Paimon's tag and Flink's
savepoint to achieve incremental recovery of job from the specified savepoint.
{{< hint warning >}}
@@ -64,7 +64,7 @@ You can set `sink.savepoint.auto-tag` to `true` to enable the feature of automat
**Step 2: Trigger savepoint.**
-You can refer to [flink savepoint](https://nightlies.apache.org/flink/flink-docs-stable/docs/ops/state/savepoints/#operations)
+You can refer to [Flink savepoint](https://nightlies.apache.org/flink/flink-docs-stable/docs/ops/state/savepoints/#operations)
to learn how to configure and trigger savepoint.
**Step 3: Choose the tag corresponding to the savepoint.**
@@ -74,7 +74,7 @@ The tag corresponding to the savepoint will be named in the form of `savepoint-$
**Step 4: Rollback the paimon table.**
-[Rollback]({{< ref "maintenance/manage-tags#rollback-to-tag" >}}) the paimon table to the specified tag.
+[Rollback]({{< ref "maintenance/manage-tags#rollback-to-tag" >}}) the Paimon table to the specified tag.
**Step 5: Restart from the savepoint.**