yuqi1129 commented on code in PR #5065:
URL: https://github.com/apache/gravitino/pull/5065#discussion_r1793570429


##########
docs/spark-connector/spark-catalog-hive.md:
##########
@@ -70,4 +70,11 @@ Gravitino catalog property names with the prefix `spark.bypass.` are passed to S
 
 :::caution
 When using the `spark-sql` shell client, you must explicitly set the `spark.bypass.spark.sql.hive.metastore.jars` in the Gravitino Hive catalog properties. Replace the default `builtin` value with the appropriate setting for your setup.
-:::
\ No newline at end of file
+:::
+
+
+## Storage
+
+### S3
+
+Please refer to [Hive catalog with s3](../hive-catalog-with-s3.md) to set up a Hive catalog with s3 storage. To query the data stored in s3, you need to add s3 secret to the Spark configuration using `spark.sql.catalog.${hive_catalog_name}.fs.s3a.access.key` and `spark.sql.catalog.${iceberg_catalog_name}.s3.fs.s3a.secret.key`. Additionally, download [hadoop aws jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws) , [aws java sdk jar](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle) and place them in the classpath of Spark.
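For context, the S3 credential settings described in the added paragraph could be supplied to a `spark-sql` session roughly as follows. This is a sketch, not part of the PR: the catalog name `hive_catalog`, the jar versions, and the credential placeholders are assumptions, and the jars must match your Hadoop version.

```shell
# Placeholder catalog name; substitute the name of your Gravitino Hive catalog.
CATALOG=hive_catalog

# Jar versions are illustrative; pick ones matching your Hadoop distribution.
spark-sql \
  --jars hadoop-aws-3.3.4.jar,aws-java-sdk-bundle-1.12.262.jar \
  --conf spark.sql.catalog.${CATALOG}.fs.s3a.access.key=<your-access-key> \
  --conf spark.sql.catalog.${CATALOG}.fs.s3a.secret.key=<your-secret-key>
```

Alternatively, the same `spark.sql.catalog.*` keys can be set in `spark-defaults.conf` instead of on the command line.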

Review Comment:
   > hadoop-aws) ,
   
   There is a space before ',', please remove it. 
   
   ditto



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]