FANNG1 commented on code in PR #5676:
URL: https://github.com/apache/gravitino/pull/5676#discussion_r1867135621
##########
docs/hive-catalog-with-s3-adls-gcs.md:
##########
@@ -225,13 +240,17 @@ To access S3-stored tables using Spark, you need to configure the SparkSession a
.config("spark.sql.catalog.{hive_catalog_name}.fs.s3a.endpoint", getS3Endpoint)
.config("spark.sql.catalog.{hive_catalog_name}.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
- ## This two is for Azure Blob Storage(ADLS) only
+ // This two is for Azure Blob Storage(ADLS) only
Review Comment:
Azure Blob Storage is not the same thing as ADLS, is it? ADLS Gen2 is built on top of Blob Storage, but they are distinct services, so the comment (and the "aka." wording below) seems misleading.
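For context on the snippet under review, here is a `spark-defaults.conf`-style sketch of the full per-catalog S3A configuration the doc appears to describe. The catalog name `hive_catalog`, the endpoint, and the credential placeholders are illustrative assumptions, not values from the PR; the per-catalog `fs.s3a.*` prefix follows the pattern shown in the diff, and `fs.s3a.access.key`/`fs.s3a.secret.key` are the standard hadoop-aws credential property names.

```properties
# Sketch only: catalog name, endpoint, and credentials are placeholders.
spark.sql.catalog.hive_catalog.fs.s3a.endpoint      https://s3.us-east-1.amazonaws.com
spark.sql.catalog.hive_catalog.fs.s3a.impl          org.apache.hadoop.fs.s3a.S3AFileSystem
spark.sql.catalog.hive_catalog.fs.s3a.access.key    <your-access-key>
spark.sql.catalog.hive_catalog.fs.s3a.secret.key    <your-secret-key>
```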
##########
docs/hive-catalog-with-s3-adls-gcs.md:
##########
@@ -11,14 +11,13 @@ license: "This software is licensed under the Apache License version 2."
Since Hive 2.x, Hive has supported S3 as a storage backend, enabling users to
store and manage data in Amazon S3 directly through Hive. Gravitino enhances
this capability by supporting the Hive catalog with S3, allowing users to
efficiently manage the storage locations of files located in S3. This
integration simplifies data operations and enables seamless access to S3 data
from Hive queries.
-For ADLS (aka. Azure Blob Storage (ABS), or Azure Data Lake Storage (v2)), the integration is similar to S3. The only difference is the configuration properties for ADLS(see below).
+For ADLS (aka. Azure Blob Storage (ABS), or Azure Data Lake Storage (v2)) and GCS (Google Cloud Storage), the integration is similar to S3. The only difference is the configuration properties for ADLS and GCS (see below).
Review Comment:
rename file to `hive-catalog-with-cloud-storage`?
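Since the quoted paragraph claims that only the configuration properties differ for ADLS and GCS, a hedged sketch of the analogous per-catalog properties, reusing the standard hadoop-azure (ABFS) and GCS-connector key names. The catalog name, storage account, and key-file path are illustrative assumptions, and applying the `spark.sql.catalog.{name}.` prefix to these keys mirrors the pattern in the diff above rather than anything confirmed by the PR.

```properties
# Sketch only: catalog name, account, and paths are placeholders.
# ADLS via the ABFS driver shipped in hadoop-azure
spark.sql.catalog.hive_catalog.fs.abfss.impl    org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem
spark.sql.catalog.hive_catalog.fs.azure.account.key.<account>.dfs.core.windows.net    <account-key>
# GCS via the GCS connector for Hadoop
spark.sql.catalog.hive_catalog.fs.gs.impl       com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
spark.sql.catalog.hive_catalog.google.cloud.auth.service.account.json.keyfile         /path/to/service-account.json
```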
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]