This is an automated email from the ASF dual-hosted git repository.

etudenhoefner pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/main by this push:
     new fd7f6b708c Docs: Enhance Flink pages (#9919)
fd7f6b708c is described below

commit fd7f6b708c12ce8ca7cb72e89fa65c04a686e73b
Author: Manu Zhang <[email protected]>
AuthorDate: Mon Mar 11 18:32:27 2024 +0800

    Docs: Enhance Flink pages (#9919)
    
    1. Fix internal links
    2. Remove period in title
    3. Fix numbered list with code blocks
    4. Add identifier fields as an approach for upsert mode
---
 docs/docs/flink-actions.md   |  4 ++--
 docs/docs/flink-connector.md |  6 +++---
 docs/docs/flink-ddl.md       |  2 +-
 docs/docs/flink-queries.md   |  2 +-
 docs/docs/flink-writes.md    | 30 +++++++++++++++---------------
 docs/docs/flink.md           |  2 +-
 6 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/docs/docs/flink-actions.md b/docs/docs/flink-actions.md
index ca67ef0e5f..4e54732c3b 100644
--- a/docs/docs/flink-actions.md
+++ b/docs/docs/flink-actions.md
@@ -18,9 +18,9 @@ title: "Flink Actions"
  - limitations under the License.
  -->
 
-## Rewrite files action.
+## Rewrite files action
 
-Iceberg provides API to rewrite small files into large files by submitting Flink batch jobs. The behavior of this Flink action is the same as Spark's [rewriteDataFiles](maintenance.md#compact-data-files).
+Iceberg provides API to rewrite small files into large files by submitting Flink batch jobs. The behavior of this Flink action is the same as Spark's [rewriteDataFiles](../maintenance.md#compact-data-files).
 
 ```java
 import org.apache.iceberg.flink.actions.Actions;
diff --git a/docs/docs/flink-connector.md b/docs/docs/flink-connector.md
index 025e9aee92..260a5c5814 100644
--- a/docs/docs/flink-connector.md
+++ b/docs/docs/flink-connector.md
@@ -29,13 +29,13 @@ To create the table in Flink SQL by using SQL syntax `CREATE TABLE test (..) WIT
 * `connector`: Use the constant `iceberg`.
 * `catalog-name`: User-specified catalog name. It's required because the connector don't have any default value.
 * `catalog-type`: `hive` or `hadoop` for built-in catalogs (defaults to `hive`), or left unset for custom catalog implementations using `catalog-impl`.
-* `catalog-impl`: The fully-qualified class name of a custom catalog implementation. Must be set if `catalog-type` is unset. See also [custom catalog](flink.md#adding-catalogs) for more details.
+* `catalog-impl`: The fully-qualified class name of a custom catalog implementation. Must be set if `catalog-type` is unset. See also [custom catalog](../flink.md#adding-catalogs) for more details.
 * `catalog-database`: The iceberg database name in the backend catalog, use the current flink database name by default.
 * `catalog-table`: The iceberg table name in the backend catalog. Default to use the table name in the flink `CREATE TABLE` sentence.
 
 ## Table managed in Hive catalog.
 
-Before executing the following SQL, please make sure you've configured the Flink SQL client correctly according to the [quick start documentation](flink.md).
+Before executing the following SQL, please make sure you've configured the Flink SQL client correctly according to the [quick start documentation](../flink.md).
 
 The following SQL will create a Flink table in the current Flink catalog, which maps to the iceberg table `default_database.flink_table` managed in iceberg catalog.
 
@@ -138,4 +138,4 @@ SELECT * FROM flink_table;
 3 rows in set
 ```
 
-For more details, please refer to the Iceberg [Flink documentation](flink.md).
+For more details, please refer to the Iceberg [Flink documentation](../flink.md).
diff --git a/docs/docs/flink-ddl.md b/docs/docs/flink-ddl.md
index c2b3051fde..681a018865 100644
--- a/docs/docs/flink-ddl.md
+++ b/docs/docs/flink-ddl.md
@@ -150,7 +150,7 @@ Table create commands support the commonly used [Flink create clauses](https://n
 
 * `PARTITION BY (column1, column2, ...)` to configure partitioning, Flink does not yet support hidden partitioning.
 * `COMMENT 'table document'` to set a table description.
-* `WITH ('key'='value', ...)` to set [table configuration](configuration.md) which will be stored in Iceberg table properties.
+* `WITH ('key'='value', ...)` to set [table configuration](../configuration.md) which will be stored in Iceberg table properties.
 
 Currently, it does not support computed column and watermark definition etc.
 
diff --git a/docs/docs/flink-queries.md b/docs/docs/flink-queries.md
index 431a5554f2..036d95a495 100644
--- a/docs/docs/flink-queries.md
+++ b/docs/docs/flink-queries.md
@@ -75,7 +75,7 @@ SET table.exec.iceberg.use-flip27-source = true;
 
 ### Reading branches and tags with SQL
 Branch and tags can be read via SQL by specifying options. For more details
-refer to [Flink Configuration](flink-configuration.md#read-options)
+refer to [Flink Configuration](../flink-configuration.md#read-options)
 
 ```sql
 --- Read from branch b1
diff --git a/docs/docs/flink-writes.md b/docs/docs/flink-writes.md
index 46bc9bb2c6..c41b367dea 100644
--- a/docs/docs/flink-writes.md
+++ b/docs/docs/flink-writes.md
@@ -59,20 +59,20 @@ Iceberg supports `UPSERT` based on the primary key when writing data into v2 tab
 
 1. Enable the `UPSERT` mode as table-level property `write.upsert.enabled`. Here is an example SQL statement to set the table property when creating a table. It would be applied for all write paths to this table (batch or streaming) unless overwritten by write options as described later.
 
-```sql
-CREATE TABLE `hive_catalog`.`default`.`sample` (
-    `id` INT COMMENT 'unique id',
-    `data` STRING NOT NULL,
-    PRIMARY KEY(`id`) NOT ENFORCED
-) with ('format-version'='2', 'write.upsert.enabled'='true');
-```
+    ```sql
+    CREATE TABLE `hive_catalog`.`default`.`sample` (
+        `id` INT COMMENT 'unique id',
+        `data` STRING NOT NULL,
+        PRIMARY KEY(`id`) NOT ENFORCED
+    ) with ('format-version'='2', 'write.upsert.enabled'='true');
+    ```
 
-2. Enabling `UPSERT` mode using `upsert-enabled` in the [write options](#write-options) provides more flexibility than a table level config. Note that you still need to use v2 table format and specify the primary key when creating the table.
+2. Enabling `UPSERT` mode using `upsert-enabled` in the [write options](#write-options) provides more flexibility than a table level config. Note that you still need to use v2 table format and specify the [primary key](../flink-ddl.md/#primary-key) or [identifier fields](../../spec.md#identifier-field-ids) when creating the table.
 
-```sql
-INSERT INTO tableName /*+ OPTIONS('upsert-enabled'='true') */
-...
-```
+    ```sql
+    INSERT INTO tableName /*+ OPTIONS('upsert-enabled'='true') */
+    ...
+    ```
 
 !!! info
     OVERWRITE and UPSERT can't be set together. In UPSERT mode, if the table is partitioned, the partition fields should be included in equality fields.
@@ -85,7 +85,7 @@ INSERT INTO tableName /*+ OPTIONS('upsert-enabled'='true') */
 Iceberg support writing to iceberg table from different DataStream input.
 
 
-### Appending data.
+### Appending data
 
 Flink supports writing `DataStream<RowData>` and `DataStream<Row>` to the sink iceberg table natively.
 
@@ -185,7 +185,7 @@ FlinkSink.builderFor(
 
 ### Branch Writes
 Writing to branches in Iceberg tables is also supported via the `toBranch` API in `FlinkSink`
-For more information on branches please refer to [branches](branching.md).
+For more information on branches please refer to [branches](../branching.md).
 ```java
 FlinkSink.forRowData(input)
     .tableLoader(tableLoader)
@@ -262,7 +262,7 @@ INSERT INTO tableName /*+ OPTIONS('upsert-enabled'='true') */
 ...
 ```
 
-Check out all the options here: [write-options](flink-configuration.md#write-options) 
+Check out all the options here: [write-options](../flink-configuration.md#write-options) 
 
 ## Notes
 
diff --git a/docs/docs/flink.md b/docs/docs/flink.md
index bfad96840b..7f27a280eb 100644
--- a/docs/docs/flink.md
+++ b/docs/docs/flink.md
@@ -271,7 +271,7 @@ env.execute("Test Iceberg DataStream");
 
 ### Branch Writes
 Writing to branches in Iceberg tables is also supported via the `toBranch` API in `FlinkSink`
-For more information on branches please refer to [branches](branching.md).
+For more information on branches please refer to [branches](../branching.md).
 ```java
 FlinkSink.forRowData(input)
     .tableLoader(tableLoader)

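Read together, the two upsert alternatives the flink-writes.md hunk reformats amount to the following sketch. The catalog, table, and column names are illustrative (borrowed from the docs' own example), and `source_table` is a hypothetical upstream table, not something from this patch:

```sql
-- Alternative 1: enable upsert for every write path as a table property,
-- keyed by the declared (non-enforced) primary key on a format v2 table.
CREATE TABLE `hive_catalog`.`default`.`sample` (
    `id`   INT COMMENT 'unique id',
    `data` STRING NOT NULL,
    PRIMARY KEY(`id`) NOT ENFORCED
) WITH ('format-version'='2', 'write.upsert.enabled'='true');

-- Alternative 2: leave the table property unset and opt in per statement
-- via the `upsert-enabled` write option in an OPTIONS hint.
INSERT INTO `hive_catalog`.`default`.`sample`
    /*+ OPTIONS('upsert-enabled'='true') */
SELECT `id`, `data` FROM `source_table`;
```

Either way, per the new wording, the table must use the v2 format and carry a primary key or identifier fields.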