This is an automated email from the ASF dual-hosted git repository.
fanng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/gravitino.git
The following commit(s) were added to refs/heads/main by this push:
new 2f0601c6b [MINOR] docs: Polish Gravitino Flink connector document (#6315)
2f0601c6b is described below
commit 2f0601c6b3ee7c74625dd60c1a135a719caf6a67
Author: FANNG <[email protected]>
AuthorDate: Fri Jan 17 17:14:38 2025 +0800
[MINOR] docs: Polish Gravitino Flink connector document (#6315)
### What changes were proposed in this pull request?
polish Flink document
### Why are the changes needed?
Polish the document
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
just document
---
docs/flink-connector/flink-catalog-hive.md | 2 +-
docs/flink-connector/flink-catalog-iceberg.md | 21 ++++++++++----------
docs/flink-connector/flink-catalog-paimon.md | 28 +++++++++++++--------------
docs/flink-connector/flink-connector.md | 2 ++
4 files changed, 28 insertions(+), 25 deletions(-)
diff --git a/docs/flink-connector/flink-catalog-hive.md b/docs/flink-connector/flink-catalog-hive.md
index ae5581706..9fc9349e3 100644
--- a/docs/flink-connector/flink-catalog-hive.md
+++ b/docs/flink-connector/flink-catalog-hive.md
@@ -34,7 +34,7 @@ Supports most DDL and DML operations in Flink SQL, except such operations:
```sql
// Suppose hive_a is the Hive catalog name managed by Gravitino
-USE hive_a;
+USE CATALOG hive_a;
CREATE DATABASE IF NOT EXISTS mydatabase;
USE mydatabase;
diff --git a/docs/flink-connector/flink-catalog-iceberg.md b/docs/flink-connector/flink-catalog-iceberg.md
index 54d7c0879..76142369f 100644
--- a/docs/flink-connector/flink-catalog-iceberg.md
+++ b/docs/flink-connector/flink-catalog-iceberg.md
@@ -32,11 +32,12 @@ To enable the Flink connector, you must download the Iceberg Flink runtime JAR a
- `CREATE TABLE LIKE` clause
## SQL example
+
```sql
-- Suppose iceberg_a is the Iceberg catalog name managed by Gravitino
-USE iceberg_a;
+USE CATALOG iceberg_a;
CREATE DATABASE IF NOT EXISTS mydatabase;
USE mydatabase;
@@ -59,15 +60,15 @@ SELECT * FROM sample WHERE data = 'B';
The Gravitino Flink connector transforms the following properties in a catalog to Flink connector configuration.

-| Gravitino catalog property name | Flink Iceberg connector configuration | Description | Since Version |
-|---------------------------------|---------------------------------------|-------------|------------------|
-| `catalog-backend` | `catalog-type` | Catalog backend type, currently, only `Hive` Catalog is supported, `JDBC` and `Rest` in Continuous Validation | 0.8.0-incubating |
-| `uri` | `uri` | Catalog backend URI | 0.8.0-incubating |
-| `warehouse` | `warehouse` | Catalog backend warehouse | 0.8.0-incubating |
-| `io-impl` | `io-impl` | The IO implementation for `FileIO` in Iceberg. | 0.8.0-incubating |
-| `oss-endpoint` | `oss.endpoint` | The endpoint of Aliyun OSS service. | 0.8.0-incubating |
-| `oss-access-key-id` | `client.access-key-id` | The static access key ID used to access OSS data. | 0.8.0-incubating |
-| `oss-secret-access-key` | `client.access-key-secret` | The static secret access key used to access OSS data. | 0.8.0-incubating |
+| Gravitino catalog property name | Flink Iceberg connector configuration | Description | Since Version |
+|---------------------------------|---------------------------------------|-------------|------------------|
+| `catalog-backend` | `catalog-type` | Catalog backend type, currently, only `Hive` Catalog is supported, `JDBC` and `Rest` in Continuous Validation | 0.8.0-incubating |
+| `uri` | `uri` | Catalog backend URI | 0.8.0-incubating |
+| `warehouse` | `warehouse` | Catalog backend warehouse | 0.8.0-incubating |
+| `io-impl` | `io-impl` | The IO implementation for `FileIO` in Iceberg. | 0.8.0-incubating |
+| `oss-endpoint` | `oss.endpoint` | The endpoint of Aliyun OSS service. | 0.8.0-incubating |
+| `oss-access-key-id` | `client.access-key-id` | The static access key ID used to access OSS data. | 0.8.0-incubating |
+| `oss-secret-access-key` | `client.access-key-secret` | The static secret access key used to access OSS data. | 0.8.0-incubating |
Gravitino catalog property names with the prefix `flink.bypass.` are passed to the Flink Iceberg connector. For example, use `flink.bypass.clients` to pass the `clients` property to the Flink Iceberg connector.
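The `flink.bypass.` pass-through described above can be sketched as follows. This is a minimal illustration, not the connector's actual code; the helper name and the sample property values are made up for the example:

```python
def bypass_properties(gravitino_props: dict) -> dict:
    """Forward any property whose name starts with 'flink.bypass.' to the
    Flink Iceberg connector, with the prefix stripped (illustrative sketch)."""
    prefix = "flink.bypass."
    return {
        name[len(prefix):]: value
        for name, value in gravitino_props.items()
        if name.startswith(prefix)
    }

# Example: 'flink.bypass.clients' is forwarded as 'clients';
# properties without the prefix are handled separately.
print(bypass_properties({"flink.bypass.clients": "5", "uri": "thrift://hms:9083"}))
```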
diff --git a/docs/flink-connector/flink-catalog-paimon.md b/docs/flink-connector/flink-catalog-paimon.md
index 87b3451a8..9a4f4f46a 100644
--- a/docs/flink-connector/flink-catalog-paimon.md
+++ b/docs/flink-connector/flink-catalog-paimon.md
@@ -6,6 +6,7 @@ license: "This software is licensed under the Apache License version 2."
---
This document provides a comprehensive guide on configuring and using Apache Gravitino Flink connector to access the Paimon catalog managed by the Gravitino server.
+
## Capabilities
### Supported Paimon Table Types
@@ -32,7 +33,7 @@ Supports most DDL and DML operations in Flink SQL, except such operations:
* Paimon 0.8
-Higher version like 0.9 or above may also supported but have not been tested fully.
+Higher versions like 0.9 or above may also be supported, but have not been fully tested.
## Getting Started
@@ -40,19 +41,18 @@ Higher version like 0.9 or above may also supported but have not been tested ful
Place the following JAR files in the lib directory of your Flink installation:
-* paimon-flink-1.18-0.8.2.jar
-
-* gravitino-flink-connector-runtime-\${flinkMajorVersion}_$scalaVersion.jar
+- `paimon-flink-1.18-${paimon-version}.jar`
+- `gravitino-flink-connector-runtime-1.18_2.12-${gravitino-version}.jar`
### SQL Example
```sql
-- Suppose paimon_catalog is the Paimon catalog name managed by Gravitino
-use catalog paimon_catalog;
+USE CATALOG paimon_catalog;
-- Execute statement succeed.
-show databases;
+SHOW DATABASES;
-- +---------------------+
-- | database name |
-- +---------------------+
@@ -71,7 +71,7 @@ CREATE TABLE paimon_tabla_a (
bb BIGINT
);
-show tables;
+SHOW TABLES;
-- +----------------+
-- | table name |
-- +----------------+
@@ -79,15 +79,15 @@ show tables;
-- +----------------+
-select * from paimon_table_a;
+SELECT * FROM paimon_table_a;
-- Empty set
-insert into paimon_table_a(aa,bb) values(1,2);
+INSERT INTO paimon_table_a(aa,bb) VALUES(1,2);
-- [INFO] Submitting SQL update statement to the cluster...
-- [INFO] SQL update statement has been successfully submitted to the cluster:
-- Job ID: 74c0c678124f7b452daf08c399d0fee2
-select * from paimon_table_a;
+SELECT * FROM paimon_table_a;
-- +----+----+
-- | aa | bb |
-- +----+----+
@@ -100,9 +100,9 @@ select * from paimon_table_a;
The Gravitino Flink connector transforms the following property names, defined in catalog properties, to Flink Paimon connector configuration.
-| Gravitino catalog property name | Flink Paimon connector configuration | Description | Since Version |
-|---------------------------------|----------------------------------------|-------------|------------------|
-| `catalog-backend` | `metastore` | Catalog backend of Gravitino Paimon catalog. Supports `filesystem`. | 0.8.0-incubating |
-| `warehouse` | `warehouse` | Warehouse directory of catalog. `file:///user/hive/warehouse-paimon/` for local fs, `hdfs://namespace/hdfs/path` for HDFS, `s3://{bucket-name}/path/` for S3 or `oss://{bucket-name}/path` for Aliyun OSS | 0.8.0-incubating |
+| Gravitino catalog property name | Flink Paimon connector configuration | Description | Since Version |
+|---------------------------------|--------------------------------------|-------------|------------------|
+| `catalog-backend` | `metastore` | Catalog backend of Gravitino Paimon catalog. Supports `filesystem`. | 0.8.0-incubating |
+| `warehouse` | `warehouse` | Warehouse directory of catalog. `file:///user/hive/warehouse-paimon/` for local fs, `hdfs://namespace/hdfs/path` for HDFS, `s3://{bucket-name}/path/` for S3 or `oss://{bucket-name}/path` for Aliyun OSS | 0.8.0-incubating |
Gravitino catalog property names with the prefix `flink.bypass.` are passed to the Flink Paimon connector. For example, use `flink.bypass.clients` to pass the `clients` property to the Flink Paimon connector.
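The property-name translation in the table above can be sketched as follows. This is a hypothetical helper for illustration; the mapping comes from the table, not from the connector's source:

```python
# Gravitino catalog property name -> Flink Paimon connector option name,
# per the table above (illustrative sketch only).
PAIMON_PROPERTY_MAP = {
    "catalog-backend": "metastore",
    "warehouse": "warehouse",
}

def to_flink_paimon_options(gravitino_props: dict) -> dict:
    """Rename the known Gravitino properties to their Flink Paimon names."""
    return {
        PAIMON_PROPERTY_MAP[name]: value
        for name, value in gravitino_props.items()
        if name in PAIMON_PROPERTY_MAP
    }

# Example: a filesystem-backed catalog with a local warehouse directory.
print(to_flink_paimon_options({
    "catalog-backend": "filesystem",
    "warehouse": "file:///user/hive/warehouse-paimon/",
}))
```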
diff --git a/docs/flink-connector/flink-connector.md b/docs/flink-connector/flink-connector.md
index 3b43f5c49..e6109bb37 100644
--- a/docs/flink-connector/flink-connector.md
+++ b/docs/flink-connector/flink-connector.md
@@ -13,6 +13,8 @@ This capability allows users to perform federation queries, accessing data from
## Capabilities
1. Supports [Hive catalog](flink-catalog-hive.md)
+1. Supports [Iceberg catalog](flink-catalog-iceberg.md)
+1. Supports [Paimon catalog](flink-catalog-paimon.md)
2. Supports most DDL and DML SQLs.
## Requirement