rdblue commented on a change in pull request #3253:
URL: https://github.com/apache/iceberg/pull/3253#discussion_r725159813



##########
File path: site/docs/java-api-quickstart.md
##########
@@ -23,12 +23,26 @@ Tables are created using either a 
[`Catalog`](./javadoc/master/index.html?org/ap
 
 ### Using a Hive catalog
 
-The Hive catalog connects to a Hive MetaStore to keep track of Iceberg tables. This example uses Spark's Hadoop configuration to get a Hive catalog:
+The Hive catalog connects to a Hive MetaStore to keep track of Iceberg tables.
+You can initialize a Hive catalog with a name and some properties
+(see [Catalog properties](https://iceberg.apache.org/configuration/#catalog-properties)):
 
 ```java
 import org.apache.iceberg.hive.HiveCatalog;
+import java.util.HashMap;
+import java.util.Map;
 
-Catalog catalog = new HiveCatalog(spark.sparkContext().hadoopConfiguration());
+Catalog catalog = new HiveCatalog();
+
+Map<String, String> properties = new HashMap<>();
+properties.put("warehouse", "...");
+properties.put("uri", "...");
+
+catalog.initialize("hive", properties);
+```
+
Alternatively, you can configure the Hive Catalog using Spark's Hadoop configuration.

Review comment:
       I wouldn't say that adding the conf is an alternative, because that implies that you don't need to pass the catalog properties. Catalog properties are separate config, so you should always pass them to configure the catalog. The Hive connection URI and warehouse location are defaulted for Hive, but that's not a normal thing. Other catalogs pretty much ignore the Configuration except to load a Hadoop FileSystem internally.
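
       To illustrate the point: catalog properties and the Hadoop `Configuration` travel through separate channels, so setting a conf does not replace `initialize`. A sketch of passing both might look like the following (assuming Iceberg's `HiveCatalog.setConf(Configuration)` from its `Configurable` support; the property values are placeholders, and `new Configuration()` stands in for however you obtain a conf, e.g. from Spark):

       ```java
       import java.util.HashMap;
       import java.util.Map;

       import org.apache.hadoop.conf.Configuration;
       import org.apache.iceberg.hive.HiveCatalog;

       public class HiveCatalogSketch {
         public static void main(String[] args) {
           HiveCatalog catalog = new HiveCatalog();

           // The Configuration supplements the catalog properties; most
           // catalogs only use it to load a Hadoop FileSystem internally.
           catalog.setConf(new Configuration());

           // Catalog properties should always be passed, even when a conf
           // is set. "..." values are placeholders, not working defaults.
           Map<String, String> properties = new HashMap<>();
           properties.put("warehouse", "...");
           properties.put("uri", "...");

           catalog.initialize("hive", properties);
         }
       }
       ```

       Note that only the Hive catalog defaults the connection URI and warehouse location from the conf; for other catalogs, omitting the properties leaves the catalog unconfigured.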




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


