This is an automated email from the ASF dual-hosted git repository.
fanng pushed a commit to branch branch-0.8
in repository https://gitbox.apache.org/repos/asf/gravitino.git
The following commit(s) were added to refs/heads/branch-0.8 by this push:
new bb5f4bd8e5 [#6229] docs: add fileset credential vending example (#6232)
bb5f4bd8e5 is described below
commit bb5f4bd8e56d4edc0e8568bceda1b04c28bbef4c
Author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue Jan 14 22:01:14 2025 +0800
[#6229] docs: add fileset credential vending example (#6232)
### What changes were proposed in this pull request?
add credential vending document for fileset
### Why are the changes needed?
Fix: #6229
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
just document
Co-authored-by: FANNG <[email protected]>
---
docs/hadoop-catalog-with-adls.md | 26 +++++++++++++++++++++++---
docs/hadoop-catalog-with-gcs.md | 22 +++++++++++++++++++---
docs/hadoop-catalog-with-oss.md | 26 +++++++++++++++++++++++---
docs/hadoop-catalog-with-s3.md | 26 +++++++++++++++++++++++---
4 files changed, 88 insertions(+), 12 deletions(-)
diff --git a/docs/hadoop-catalog-with-adls.md b/docs/hadoop-catalog-with-adls.md
index 96126c6fab..880166776f 100644
--- a/docs/hadoop-catalog-with-adls.md
+++ b/docs/hadoop-catalog-with-adls.md
@@ -480,11 +480,31 @@ For other use cases, please refer to the [Gravitino Virtual File System](./how-t
Since 0.8.0-incubating, Gravitino supports credential vending for ADLS fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access ADLS fileset without providing authentication information like `azure-storage-account-name` and `azure-storage-account-key` in the properties.
-### How to create an ADLS Hadoop catalog with credential enabled
+### How to create an ADLS Hadoop catalog with credential vending
-Apart from configuration method in [create-adls-hadoop-catalog](#configuration-for-a-adls-hadoop-catalog), properties needed by [adls-credential](./security/credential-vending.md#adls-credentials) should also be set to enable credential vending for ADLS fileset.
+Apart from the configuration method in [create-adls-hadoop-catalog](#configuration-for-a-adls-hadoop-catalog), the properties needed by [adls-credential](./security/credential-vending.md#adls-credentials) should also be set to enable credential vending for the ADLS fileset. Take the `adls-token` credential provider for example:
-### How to access ADLS fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+ "name": "adls-catalog-with-token",
+ "type": "FILESET",
+ "comment": "This is an ADLS fileset catalog",
+ "provider": "hadoop",
+ "properties": {
+ "location": "abfss://[email protected]/path",
+ "azure-storage-account-name": "The account name of the Azure Blob Storage",
+ "azure-storage-account-key": "The account key of the Azure Blob Storage",
+ "filesystem-providers": "abs",
+ "credential-providers": "adls-token",
+ "azure-tenant-id":"The Azure tenant id",
+ "azure-client-id":"The Azure client id",
+ "azure-client-secret":"The Azure client secret key"
+ }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access ADLS fileset with credential vending
If the catalog has been configured with credential, you can access ADLS fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access ADLS fileset with credential:
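The catalog-creation request in the curl example above can also be built programmatically. A minimal Python sketch that assembles the same payload (the helper name and the placeholder tenant/client values are hypothetical; the property keys mirror the curl body in the docs):

```python
import json

# Build the ADLS catalog-creation payload shown in the curl example above.
# The tenant/client placeholder values are hypothetical; property keys
# follow the credential-vending documentation quoted in the diff.
def adls_catalog_payload(name, location, tenant_id, client_id, client_secret):
    return {
        "name": name,
        "type": "FILESET",
        "comment": "This is an ADLS fileset catalog",
        "provider": "hadoop",
        "properties": {
            "location": location,
            "filesystem-providers": "abs",
            "credential-providers": "adls-token",
            "azure-tenant-id": tenant_id,
            "azure-client-id": client_id,
            "azure-client-secret": client_secret,
        },
    }

payload = adls_catalog_payload(
    "adls-catalog-with-token",
    "abfss://container/path",  # hypothetical location placeholder
    "my-tenant-id", "my-client-id", "my-client-secret",
)
body = json.dumps(payload, indent=2)
# POST `body` to http://localhost:8090/api/metalakes/metalake/catalogs
# with the Accept/Content-Type headers from the curl example.
print(body)
```

This only prepares the request body; sending it requires a running Gravitino server as in the curl example.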
diff --git a/docs/hadoop-catalog-with-gcs.md b/docs/hadoop-catalog-with-gcs.md
index a3eb034b4f..5422047efd 100644
--- a/docs/hadoop-catalog-with-gcs.md
+++ b/docs/hadoop-catalog-with-gcs.md
@@ -459,11 +459,27 @@ For other use cases, please refer to the [Gravitino Virtual File System](./how-t
Since 0.8.0-incubating, Gravitino supports credential vending for GCS fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access GCS fileset without providing authentication information like `gcs-service-account-file` in the properties.
-### How to create a GCS Hadoop catalog with credential enabled
+### How to create a GCS Hadoop catalog with credential vending
-Apart from configuration method in [create-gcs-hadoop-catalog](#configurations-for-a-gcs-hadoop-catalog), properties needed by [gcs-credential](./security/credential-vending.md#gcs-credentials) should also be set to enable credential vending for GCS fileset.
+Apart from the configuration method in [create-gcs-hadoop-catalog](#configurations-for-a-gcs-hadoop-catalog), the properties needed by [gcs-credential](./security/credential-vending.md#gcs-credentials) should also be set to enable credential vending for the GCS fileset. Take the `gcs-token` credential provider for example:
-### How to access GCS fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+ "name": "gcs-catalog-with-token",
+ "type": "FILESET",
+ "comment": "This is a GCS fileset catalog",
+ "provider": "hadoop",
+ "properties": {
+ "location": "gs://bucket/root",
+ "gcs-service-account-file": "path_of_gcs_service_account_file",
+ "filesystem-providers": "gcs",
+ "credential-providers": "gcs-token"
+ }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access GCS fileset with credential vending
If the catalog has been configured with credential, you can access GCS fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access GCS fileset with credential:
diff --git a/docs/hadoop-catalog-with-oss.md b/docs/hadoop-catalog-with-oss.md
index e63935c720..b9ef5f44e2 100644
--- a/docs/hadoop-catalog-with-oss.md
+++ b/docs/hadoop-catalog-with-oss.md
@@ -495,11 +495,31 @@ For other use cases, please refer to the [Gravitino Virtual File System](./how-t
Since 0.8.0-incubating, Gravitino supports credential vending for OSS fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access OSS fileset without providing authentication information like `oss-access-key-id` and `oss-secret-access-key` in the properties.
-### How to create a OSS Hadoop catalog with credential enabled
+### How to create an OSS Hadoop catalog with credential vending
-Apart from configuration method in [create-oss-hadoop-catalog](#configuration-for-an-oss-hadoop-catalog), properties needed by [oss-credential](./security/credential-vending.md#oss-credentials) should also be set to enable credential vending for OSS fileset.
+Apart from the configuration method in [create-oss-hadoop-catalog](#configuration-for-an-oss-hadoop-catalog), the properties needed by [oss-credential](./security/credential-vending.md#oss-credentials) should also be set to enable credential vending for the OSS fileset. Take the `oss-token` credential provider for example:
-### How to access OSS fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+ "name": "oss-catalog-with-token",
+ "type": "FILESET",
+ "comment": "This is an OSS fileset catalog",
+ "provider": "hadoop",
+ "properties": {
+ "location": "oss://bucket/root",
+ "oss-access-key-id": "access_key",
+ "oss-secret-access-key": "secret_key",
+ "oss-endpoint": "http://oss-cn-hangzhou.aliyuncs.com",
+ "filesystem-providers": "oss",
+ "credential-providers": "oss-token",
+ "oss-region":"oss-cn-hangzhou",
+ "oss-role-arn":"The ARN of the role to access the OSS data"
+ }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access OSS fileset with credential vending
If the catalog has been configured with credential, you can access OSS fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access OSS fileset with credential:
diff --git a/docs/hadoop-catalog-with-s3.md b/docs/hadoop-catalog-with-s3.md
index 7d56f2b9ab..f138276189 100644
--- a/docs/hadoop-catalog-with-s3.md
+++ b/docs/hadoop-catalog-with-s3.md
@@ -498,11 +498,31 @@ For more use cases, please refer to the [Gravitino Virtual File System](./how-to
Since 0.8.0-incubating, Gravitino supports credential vending for S3 fileset. If the catalog has been [configured with credential](./security/credential-vending.md), you can access S3 fileset without providing authentication information like `s3-access-key-id` and `s3-secret-access-key` in the properties.
-### How to create a S3 Hadoop catalog with credential enabled
+### How to create an S3 Hadoop catalog with credential vending
-Apart from configuration method in [create-s3-hadoop-catalog](#configurations-for-s3-hadoop-catalog), properties needed by [s3-credential](./security/credential-vending.md#s3-credentials) should also be set to enable credential vending for S3 fileset.
+Apart from the configuration method in [create-s3-hadoop-catalog](#configurations-for-s3-hadoop-catalog), the properties needed by [s3-credential](./security/credential-vending.md#s3-credentials) should also be set to enable credential vending for the S3 fileset. Take the `s3-token` credential provider for example:
-### How to access S3 fileset with credential
+```shell
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+ "name": "s3-catalog-with-token",
+ "type": "FILESET",
+ "comment": "This is an S3 fileset catalog",
+ "provider": "hadoop",
+ "properties": {
+ "location": "s3a://bucket/root",
+ "s3-access-key-id": "access_key",
+ "s3-secret-access-key": "secret_key",
+ "s3-endpoint": "http://s3.ap-northeast-1.amazonaws.com",
+ "filesystem-providers": "s3",
+ "credential-providers": "s3-token",
+ "s3-region":"ap-northeast-1",
+ "s3-role-arn":"The ARN of the role to access the S3 data"
+ }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
+```
+
+### How to access S3 fileset with credential vending
If the catalog has been configured with credential, you can access S3 fileset without providing authentication information via GVFS Java/Python client and Spark. Let's see how to access S3 fileset with credential:
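The diff context ends before the access examples, but each section notes that a credential-enabled fileset is reachable through the GVFS Java/Python client or Spark without client-side keys. A minimal Python sketch of the virtual path those clients address (the `gvfs://fileset/<catalog>/<schema>/<fileset>` layout follows the Gravitino Virtual File System docs; the catalog, schema, and fileset names below are hypothetical):

```python
# Sketch: compose the GVFS virtual path used by the GVFS Java/Python
# client and Spark. The gvfs://fileset/<catalog>/<schema>/<fileset>
# layout follows the Gravitino Virtual File System docs; all names
# here are hypothetical placeholders.
def gvfs_path(catalog: str, schema: str, fileset: str, sub_path: str = "") -> str:
    base = f"gvfs://fileset/{catalog}/{schema}/{fileset}"
    return f"{base}/{sub_path.lstrip('/')}" if sub_path else base

path = gvfs_path("s3-catalog-with-token", "my_schema", "my_fileset",
                 "data/part-0.parquet")
print(path)
# With credential vending configured on the catalog, a client reading
# this path does not need s3-access-key-id / s3-secret-access-key in
# its own configuration; Gravitino vends temporary credentials.
```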