This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/gravitino.git


The following commit(s) were added to refs/heads/branch-0.7 by this push:
     new 976e2fe0e [MINOR] polish Iceberg related document (#5446)
976e2fe0e is described below

commit 976e2fe0e69ca09f5de945efc83a293320812947
Author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue Nov 5 11:02:18 2024 +0800

    [MINOR] polish Iceberg related document (#5446)
    
    ### What changes were proposed in this pull request?
    1. gcp -> GCP
    2. correct some configuration
    
    ### Why are the changes needed?
    polish the document
    
    ### Does this PR introduce _any_ user-facing change?
    no
    ### How was this patch tested?
    just document
    
    Co-authored-by: FANNG <[email protected]>
---
 docs/lakehouse-iceberg-catalog.md             | 4 ++--
 docs/spark-connector/spark-catalog-hive.md    | 2 +-
 docs/spark-connector/spark-catalog-iceberg.md | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/lakehouse-iceberg-catalog.md b/docs/lakehouse-iceberg-catalog.md
index f4cc48f89..28b9b37a9 100644
--- a/docs/lakehouse-iceberg-catalog.md
+++ b/docs/lakehouse-iceberg-catalog.md
@@ -83,7 +83,7 @@ Supports using static access-key-id and secret-access-key to access S3 data.
 For other Iceberg s3 properties not managed by Gravitino like `s3.sse.type`, you could config it directly by `gravitino.bypass.s3.sse.type`.
 
 :::info
-To configure the JDBC catalog backend, set the `warehouse` parameter to `s3://{bucket_name}/${prefix_name}`. For the Hive catalog backend, set `warehouse` to `s3a://{bucket_name}/${prefix_name}`. Additionally, download the [Iceberg AWS bundle]([Iceberg AWS bundle](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-aws-bundle)) and place it in the `catalogs/lakehouse-iceberg/libs/` directory.
+To configure the JDBC catalog backend, set the `warehouse` parameter to `s3://{bucket_name}/${prefix_name}`. For the Hive catalog backend, set `warehouse` to `s3a://{bucket_name}/${prefix_name}`. Additionally, download the [Iceberg AWS bundle](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-aws-bundle) and place it in the `catalogs/lakehouse-iceberg/libs/` directory.
 :::
 
 #### OSS
@@ -116,7 +116,7 @@ For other Iceberg GCS properties not managed by Gravitino like `gcs.project-id`,
 Please make sure the credential file is accessible by Gravitino, like using `export GOOGLE_APPLICATION_CREDENTIALS=/xx/application_default_credentials.json` before Gravitino server is started.
 
 :::info
-Please set `warehouse` to `gs://{bucket_name}/${prefix_name}`, and download [Iceberg gcp bundle jar](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-gcp-bundle) and place it to `catalogs/lakehouse-iceberg/libs/`.
+Please set `warehouse` to `gs://{bucket_name}/${prefix_name}`, and download [Iceberg GCP bundle jar](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-gcp-bundle) and place it to `catalogs/lakehouse-iceberg/libs/`.
 :::
 
 #### Other storages
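For context on the change above, a minimal sketch of what the resulting catalog properties might look like for a JDBC backend with S3 storage. Property names (`catalog-backend`, `warehouse`, `s3-access-key-id`, and so on) are assumed from the surrounding Gravitino Iceberg catalog docs; all values are placeholders:

```properties
# Hypothetical Gravitino Iceberg catalog properties; names assumed from the docs, values are placeholders.
catalog-backend = jdbc
uri = jdbc:mysql://127.0.0.1:3306/iceberg_db
jdbc-user = iceberg
jdbc-password = <password>
# JDBC backend uses the s3:// scheme; a Hive backend would use s3a:// instead.
warehouse = s3://<bucket_name>/<prefix_name>
s3-access-key-id = <access-key-id>
s3-secret-access-key = <secret-access-key>
# Iceberg S3 properties not managed by Gravitino can be passed through, e.g.:
gravitino.bypass.s3.sse.type = none
```

The Iceberg AWS bundle jar still has to be placed in `catalogs/lakehouse-iceberg/libs/` as the doc describes; the properties alone are not sufficient.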
diff --git a/docs/spark-connector/spark-catalog-hive.md b/docs/spark-connector/spark-catalog-hive.md
index ba102c072..9510f9cc9 100644
--- a/docs/spark-connector/spark-catalog-hive.md
+++ b/docs/spark-connector/spark-catalog-hive.md
@@ -77,4 +77,4 @@ When using the `spark-sql` shell client, you must explicitly set the `spark.bypa
 
 ### S3
 
-Please refer to [Hive catalog with s3](../hive-catalog-with-s3.md) to set up a Hive catalog with s3 storage. To query the data stored in s3, you need to add s3 secret to the Spark configuration using `spark.sql.catalog.${hive_catalog_name}.fs.s3a.access.key` and `spark.sql.catalog.${iceberg_catalog_name}.s3.fs.s3a.secret.key`. Additionally, download [hadoop aws jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws), [aws java sdk jar](https://mvnrepository.com/artifact/com [...]
+Please refer to [Hive catalog with s3](../hive-catalog-with-s3.md) to set up a Hive catalog with s3 storage. To query the data stored in s3, you need to add s3 secret to the Spark configuration using `spark.sql.catalog.${hive_catalog_name}.fs.s3a.access.key` and `spark.sql.catalog.${hive_catalog_name}.fs.s3a.secret.key`. Additionally, download [hadoop aws jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws), [aws java sdk jar](https://mvnrepository.com/artifact/com.amazo [...]
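Concretely, the corrected configuration key could be supplied at launch roughly like this; the catalog name `my_hive_catalog` and the key values are placeholders, and the hadoop-aws and AWS SDK jars mentioned in the doc must also be on the classpath:

```shell
# Hypothetical spark-sql invocation; catalog name and credential values are placeholders.
spark-sql \
  --conf spark.sql.catalog.my_hive_catalog.fs.s3a.access.key=<access-key-id> \
  --conf spark.sql.catalog.my_hive_catalog.fs.s3a.secret.key=<secret-access-key>
```

Note the fix: both keys now target the same `${hive_catalog_name}` catalog, where the old text mixed in an `${iceberg_catalog_name}` key by mistake.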
diff --git a/docs/spark-connector/spark-catalog-iceberg.md b/docs/spark-connector/spark-catalog-iceberg.md
index dca23db6a..e4933a303 100644
--- a/docs/spark-connector/spark-catalog-iceberg.md
+++ b/docs/spark-connector/spark-catalog-iceberg.md
@@ -131,7 +131,7 @@ You need to add OSS secret key to the Spark configuration using `spark.sql.catal
 
 ### GCS
 
-No extra configuration is needed. Please make sure the credential file is accessible by Spark, like using `export GOOGLE_APPLICATION_CREDENTIALS=/xx/application_default_credentials.json`, and download [Iceberg gcp bundle](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-gcp-bundle) and place it to the classpath of Spark.
+No extra configuration is needed. Please make sure the credential file is accessible by Spark, like using `export GOOGLE_APPLICATION_CREDENTIALS=/xx/application_default_credentials.json`, and download [Iceberg GCP bundle](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-gcp-bundle) and place it to the classpath of Spark.
 
 ### Other storage
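For the GCS section above, the described setup might look like the following sketch; the jar path is a placeholder and the credential path is the one used in the doc:

```shell
# Make the GCP credential file visible to Spark before launch (path as in the docs).
export GOOGLE_APPLICATION_CREDENTIALS=/xx/application_default_credentials.json

# Hypothetical launch adding the downloaded Iceberg GCP bundle to the Spark classpath.
spark-sql --jars /path/to/iceberg-gcp-bundle.jar
```

`--jars` is one common way to put the bundle on the Spark classpath; dropping the jar into Spark's `jars/` directory works as well.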
 
